The Research Consumer Reviews the Measures

Chapter 5. The Research Consumer Reviews the Measures

Data Collection and the Research Consumer
Evaluation researchers collect information on:
- Study participants (who they are and how the program affects them)
- Program implementation and "processes"
- Effects of interventions and programs (outcomes, impact, and costs)
Consumers should ask: Does evidence about program effectiveness come from appropriate and valid measures?
Consumers should also understand the characteristics of the data collected (e.g., "Yes, I read magazines" versus "I read 4 magazines on average each month").
Evaluation information comes from surveys, observations, tests, physical exams, and reviews of printed, oral, and electronic reports and databases.

Surveys
Self-administered surveys ask participants to answer questions or respond to items in writing ("paper-and-pencil") or online.
Advantage
- Many people are accustomed to completing surveys regardless of where or how they are administered. Familiarity with a data collection process saves staff time.

Surveys (Continued)
Disadvantages
- If asked to recall events (e.g., "In the past four weeks, how often did you...?"), participants may not remember.
- The people who respond to surveys may be the ones who feel most strongly (really pleased or really angry).
- The self-administered survey's format is not suitable (and not really designed) for obtaining explanations of behavior or sensitive information.
- Without supervision, some participants may fail to answer some or even all questions because they do not understand them.
- Some people may have difficulty completing surveys, especially if the reading level is too high for them, they are not interested, or they are too busy or too ill.

Multiple-Choice Achievement Tests
Achievement tests are the most commonly used method of collecting information on educational accomplishment. Tests are used to measure knowledge, understanding, and application of theories, principles, and facts.
Advantages
- Almost all students are used to multiple-choice achievement tests and have similar expectations of them regardless of their age or other demographic characteristics.
- Many standardized tests are available, enabling researchers to compare findings across students who have differing backgrounds and experience.
- Tests are often the most efficient method of measuring the knowledge that is supposed to result from a particular lesson or course of instruction.
- Valid multiple-choice tests are relatively easy to score and interpret (see the sketch below).
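Because each item has a single correct answer, scoring can be fully mechanical. The following is a minimal scoring sketch in Python; the item names, answer key, and responses are invented for illustration and are not drawn from any actual test.

    # Hypothetical answer key and one student's responses.
    answer_key = {"q1": "b", "q2": "d", "q3": "a", "q4": "c"}
    responses = {"q1": "b", "q2": "c", "q3": "a", "q4": "c"}

    # Each item is simply right or wrong, so scoring reduces to counting matches.
    score = sum(responses.get(q) == correct for q, correct in answer_key.items())
    print(f"{score}/{len(answer_key)} correct "
          f"({100 * score / len(answer_key):.0f}%)")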

Multiple-Choice Achievement Tests (Continued)
Disadvantages
- Multiple-choice tests may not be the most appropriate method for measuring "high" levels of knowledge or for assessing attitudes and values.
- Each test question must be carefully studied to ensure that it measures the concept it is supposed to measure. Many questions do not meet this standard.

Record Reviews
Record reviews are analyses of documented behavior. Documentation may be in print, online, or on audio or video.
Records come in two forms. The first type consists of existing documents (e.g., participants' medical, school, or employment records). The second is developed specifically for a given program (e.g., a food diary that is kept especially for a nutrition program).

Record Reviews (Continued)
Advantages
- Obtaining data from existing records can be relatively unobtrusive in that study participants' daily activities need not be disturbed.
- Records are a relatively reliable storehouse of actual practice or behavior.
- If data are needed on many demographic characteristics (e.g., age, sex, insurance status), records are often the best source.
- Data obtained from existing records, such as medical and school records, may reduce participants' research burden.

Record Reviews (Continued)
Disadvantages
- Finding information in records is often time-consuming.
- The record review process must be demonstrably reliable and accurate. This may mean extensive staff training and monitoring of the quality of the review.
- Certain types of information are rarely recorded (e.g., functional or mental status; time spent with clients "after hours").
- Records do not provide data on the appropriateness of a practice or on the relationship between what was done by the practitioner (the process of care) and the results (the outcomes of care).

Observations
Observations produce descriptions such as:
- the size of a classroom, or the number, types, and dates of magazines in the office waiting room
- a portrait of the dynamics of a situation (e.g., a meeting between parents and teachers, or children at play)
Observations take three basic forms:
- Participant observation, in which the researcher actually becomes part of the community being observed
- In-person observation
- Unobtrusive observation (e.g., placing a camera in a room, clinic, or yard)

Observations (Continued)
Advantages
- Observations provide an opportunity to collect firsthand information.
- Observations can provide information that cannot be anticipated, because the evaluator is present when the unforeseen occurs.
Disadvantages
- A very structured format and extensive training are required for dependable observations.
- Observations are labor-intensive and time-consuming.
- The observer (camera or human) can influence the environment being studied.

Interviews
Interviews are conducted in person (with or without the assistance of a computer), by telephone, or using electronic methods.
Advantages
- Interviews allow the researcher to explain the meaning of questions.
- Interviews can be useful in collecting information from people who may have difficulty reading or seeing.
Disadvantages
- Interviews are time-consuming and labor-intensive.
- Interviews require extensive training and monitoring if they are to elicit accurate information in a timely manner.
- Special skills may be required to interpret responses that are off the record.

Databases and Data Sets
Governments, statisticians, and researchers compile data into databases to keep track of individuals and communities so as to describe and monitor health, education, and the need for and use of social services.
Researchers typically use large databases and data sets to help program planners understand the extent of need for public programs and to monitor progress and outcomes over time.

Databases and Data Sets (Continued)
Advantages
- Using existing data can be economical; often a major cost in research is collecting new data.
Disadvantages
- The selection of data to collect, the choice of persons from whom to collect it, the quality of the data, and how the data were recorded are all predetermined.

Vignettes (Hypothetical Scenarios)
A vignette is a short scenario that is used to collect data about "what if" situations.
Advantages
- Vignettes can be fun for participants to complete.
- Vignettes can be efficient. They enable the researcher to vary important factors (e.g., age, gender) one at a time across vignettes. Not every participant has to review a scenario with every factor, as long as all participants review some factors and all factors are reviewed (see the sketch below).
Disadvantages
- Producing vignettes requires technical and artistic (writing) skill.
- The researcher cannot be certain that the responses accurately reflect the participant's true feelings or behavior.
- Sampling can get complicated.
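As a concrete illustration of varying factors across vignettes, here is a minimal sketch in Python; the factors, levels, and scenario template are hypothetical, invented for illustration only.

    import itertools
    import random

    # Hypothetical vignette factors and levels (not from any actual study).
    factors = {
        "age": ["25-year-old", "65-year-old"],
        "gender": ["man", "woman"],
        "complaint": ["chest pain", "persistent fatigue"],
    }

    template = "A {age} {gender} visits the clinic complaining of {complaint}."

    # Full factorial design: one vignette per combination of factor levels
    # (2 x 2 x 2 = 8 vignettes here).
    vignettes = [
        template.format(**dict(zip(factors.keys(), combo)))
        for combo in itertools.product(*factors.values())
    ]

    # Each participant reviews only a subset; across the whole sample,
    # every vignette should still be reviewed by someone.
    packet_for_one_participant = random.sample(vignettes, k=3)
    for vignette in packet_for_one_participant:
        print(vignette)

In a real study the subsets would usually be assigned systematically (for example, with a balanced incomplete block design) rather than purely at random, which is part of why sampling can get complicated.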

Response Choices
Response choices are variously referred to as rating or measurement scales, or levels of measurement. There are three basic measurement levels, illustrated in the sketch after this list:
- Categorical responses. Categorical responses force respondents to put answers into categories, such as: Are you male or female? When only two choices are possible (e.g., male and female), the results are termed dichotomous.
- Ordinal responses. Questions using ordinal response options offer responses with a built-in order (e.g., poor, fair, good, very good, excellent).
- Numerical responses: continuous and discrete. Continuous responses can have an infinite number of values (e.g., weight). Discrete numerical data ask for an actual number in response (e.g., the number of calories in one cookie).
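To make the three levels concrete, here is a minimal sketch using pandas; the variable names and values are invented for illustration, not drawn from any particular survey.

    import pandas as pd

    # Invented responses from three hypothetical participants.
    df = pd.DataFrame({
        "sex": ["male", "female", "female"],                 # categorical (dichotomous)
        "self_rated_health": ["poor", "good", "excellent"],  # ordinal
        "weight_kg": [72.5, 60.1, 88.0],                     # numerical, continuous
        "magazines_per_month": [4, 0, 2],                    # numerical, discrete
    })

    # Encoding the ordinal item with its built-in order lets comparisons
    # respect poor < fair < good < very good < excellent.
    health_scale = pd.CategoricalDtype(
        categories=["poor", "fair", "good", "very good", "excellent"],
        ordered=True,
    )
    df["self_rated_health"] = df["self_rated_health"].astype(health_scale)

    # Operations that are meaningful differ by level of measurement:
    print(df["weight_kg"].mean())         # continuous: averaging is meaningful
    print(df["self_rated_health"].min())  # ordinal: ordering is meaningful ("poor")

The design point is that the analysis a researcher can legitimately run (averages, medians, counts) depends on which measurement level the response choices produced.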

Questions Asked by Evaluators when Selecting Measures
- Which variables are to be measured?
- Can I borrow or adapt a currently available measure for each variable, or must one or more measures be created?
- If appropriate measures are available, do I have the technical and financial resources to acquire and administer them? If none is available, do I have the technical skills, financial resources, and time to create them?
- Are participants likely to be able to fill out forms, answer questions, and provide the other information called for by the measure?

Questions Asked by Evaluators when Selecting Measures (Continued)
- In studies that involve direct services and the use of information from medical, school, prison, and other confidential records, can I obtain permission to collect data in an ethical way?
- To what extent will users of the evaluation's results (e.g., practitioners, students, patients, program developers, policy-makers, and sponsors) have confidence in the measures?