GSSR Research Methodology and Methods of Social Inquiry
socialinquiry.wordpress.com
January 17, 2012
I. Mixed Methods Research
II. Evaluation Research
MULTIPLE-METHODS APPROACH
Triangulation: applying two or more dissimilar measures and/or methods (research strategies) to investigate a given problem.
Why do it: increased confidence in findings; when dissimilar methods converge on the same result, that result is less likely to be an artifact of any single method.
Key:
- What you want to study (i.e., the nature of the research question, the phenomena considered) should determine your research strategy/methods!
- The relative strengths and weaknesses of alternative approaches should be weighed in deciding which methods to select, and how best to combine them when possible.
See Table 12.2 in Singleton and Straits, p. 399.
Multiple methods can also be used within a single approach: this allows exploiting the strengths of complementary methods while offsetting their weaknesses.
Ex: one approach (the survey method), but the mail questionnaire goes to a probability sample, while face-to-face interviews are conducted with a smaller sample of non-respondents, to estimate non-response bias (see the sketch below).
Other examples: vignette experimental designs in survey research; use of archival records to identify groups for field research …
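To make the non-response example concrete, here is a minimal sketch in Python. All numbers are hypothetical, and the adjustment simply assumes the follow-up interviewees are representative of all non-respondents:

```python
# Minimal sketch (hypothetical numbers): using a follow-up sample of
# non-respondents to estimate non-response bias in a mail survey.

n_respondents = 620        # returned the mail questionnaire
n_nonrespondents = 380     # did not return it
mean_respondents = 42.0    # mean outcome among respondents
mean_followup = 35.0       # mean among interviewed non-respondents,
                           # taken as an estimate for all non-respondents

n_total = n_respondents + n_nonrespondents
response_rate = n_respondents / n_total

# Adjusted population estimate: weight each group by its share of the sample.
adjusted_mean = (response_rate * mean_respondents
                 + (1 - response_rate) * mean_followup)

# Non-response bias of the naive (respondents-only) estimate.
bias = mean_respondents - adjusted_mean
print(f"adjusted mean = {adjusted_mean:.2f}, non-response bias = {bias:.2f}")
# adjusted mean = 39.34, non-response bias = 2.66
```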
II. EVALUATION RESEARCH
www.socialresearchmethods.net/kb/intreval.php
http://ec.europa.eu/regional_policy/sources/docgener/evaluation/evalsed/sourcebooks/method_techniques/index_en.htm
Application of social research methods to:
(a) assessing social intervention programs and policies instituted to solve social problems;
(b) in the private sector: assessing policy, personnel, and products.
Major goal of evaluation: influence decision-making and policy formulation by providing empirically driven feedback.
Evaluation takes place within a political and organizational context, where researchers face multiple stakeholders.
Stakeholders: individuals, groups, or organizations that have a significant interest in how well a program/product functions/performs.
Ex:
- the program sponsor (the actor who initiates and funds the program/product);
- the evaluation sponsor (the actor who mandates and funds the evaluation);
- the policymaker/decision maker who determines the fate of the program/product, …
Outcome of evaluation: a detailed technical report that describes the research design, methods, and results.
Plus: executive summaries, memos, and oral reports geared to the needs of specific stakeholders.
Evaluation Strategies
Scientific-experimental models (see socialresearchmethods.net/kb/intreval.php):
- take their values and methods from the social sciences;
- prioritize impartiality, accuracy, objectivity, and the validity of the information generated.
Ex:
- experimental and quasi-experimental designs;
- objectives-based research, originating in education;
- econometrically oriented perspectives, including cost-effectiveness and cost-benefit analysis;
- theory-driven evaluation.
Management-oriented systems models
- emphasize comprehensiveness in evaluation, placing evaluation within a larger framework of organizational activities.
Ex:
- the Program Evaluation and Review Technique (PERT);
- the Critical Path Method (CPM), illustrated in the sketch below;
- the Logical Framework ("Logframe") model developed at the U.S. Agency for International Development;
- Units, Treatments, Observing operations, Settings (UTOS);
- Context, Input, Process, Product (CIPP).
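To show what CPM actually computes, here is a minimal Python sketch on a hypothetical task graph; the task names and durations are invented purely for illustration:

```python
# Minimal Critical Path Method (CPM) sketch on a hypothetical task graph.
# Each task: (duration, list of prerequisite tasks).
tasks = {
    "design":  (3, []),
    "build":   (5, ["design"]),
    "test":    (2, ["build"]),
    "docs":    (2, ["design"]),
    "release": (1, ["test", "docs"]),
}

# Earliest finish time of each task (length of the longest path to it).
earliest = {}
def finish(name):
    if name not in earliest:
        duration, preds = tasks[name]
        earliest[name] = duration + max((finish(p) for p in preds), default=0)
    return earliest[name]

project_length = max(finish(t) for t in tasks)
print("project length:", project_length)  # 11

# Trace one critical path backwards: at each step pick a predecessor
# whose earliest finish equals this task's earliest start.
path, current = [], max(tasks, key=lambda t: earliest[t])
while current:
    path.append(current)
    duration, preds = tasks[current]
    start = earliest[current] - duration
    current = next((p for p in preds if earliest[p] == start), None)
print("critical path:", " -> ".join(reversed(path)))
# design -> build -> test -> release
```

The critical path is the longest chain of dependent tasks; its length sets the minimum project duration, so tasks on it have zero scheduling slack.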
Qualitative/anthropological models
Emphasize:
- the importance of observation;
- the need to retain the phenomenological quality of the evaluation context;
- the value of subjective human interpretation in the evaluation process.
Ex: naturalistic or 'Fourth Generation' evaluation; the various qualitative schools; critical theory and art criticism approaches; and the 'grounded theory' approach of Glaser and Strauss, among others.
Participant-oriented models
Emphasize the central importance of the evaluation participants, especially clients and users of the program or technology.
Ex: client-centered and stakeholder approaches; consumer-oriented evaluation systems.
Types of Evaluation
Formative evaluation (assesses the program/product itself):
- Needs assessment: who needs the program? How great is the need? What might work to meet the need?
- Evaluability assessment: is an evaluation feasible, and how can stakeholders help shape its usefulness?
- Structured conceptualization: helps stakeholders define the program/technology, the target population, and the possible outcomes.
- Implementation evaluation: monitors the fidelity of the program or technology delivery.
- Process evaluation: investigates the process of delivering the program or technology, including alternative delivery procedures.
Summative evaluation (assesses effects/outcomes):
- Outcome evaluation: did the program/technology produce demonstrable effects on specifically defined target outcomes? (effect assessment)
- Impact evaluation: broader; assesses the overall/net effects, intended or unintended, of the program/technology as a whole.
- Cost-effectiveness and cost-benefit analysis: address questions of efficiency by standardizing outcomes in terms of their dollar costs and values.
- Secondary analysis: reexamines existing data to address new questions or to apply methods not previously employed.
- Meta-analysis: integrates the outcome estimates from multiple studies to arrive at an overall/summary judgement on an evaluation question (see the sketch below).
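As an illustration of the meta-analytic idea, here is a minimal fixed-effect (inverse-variance) pooling sketch in Python; the study estimates and standard errors are hypothetical:

```python
# Minimal fixed-effect meta-analysis sketch (hypothetical numbers):
# pool effect estimates from several studies by inverse-variance weighting.
import math

# Each study: (effect estimate, standard error). Values are illustrative only.
studies = [(0.30, 0.12), (0.15, 0.08), (0.45, 0.20), (0.22, 0.10)]

weights = [1 / se**2 for _, se in studies]          # inverse-variance weights
pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

# 95% confidence interval for the pooled effect.
lo, hi = pooled - 1.96 * pooled_se, pooled + 1.96 * pooled_se
print(f"pooled effect = {pooled:.3f}, 95% CI = [{lo:.3f}, {hi:.3f}]")
```

Precise studies (small standard errors) dominate the pooled estimate, which is what "integrating outcome estimates" means in practice.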
Methodological Issues in Evaluation Research
Effect assessment: did the program/technology cause demonstrable effects?
The 'black box' paradigm. We can observe:
- what goes into the 'black box': the inputs (here, the program/product/intervention), and
- what comes out of the box: the outputs (certain effects).
Theory serves as a guide to research.
Research Design & Internal Validity
Ideal strategy for effect assessment: an experiment, with units of analysis randomly assigned to at least two conditions (one with the intervention present, one without); see the sketch below.
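A minimal simulation sketch of this design in Python; the data-generating process (baseline 10, a true effect of +2.0, noise) is invented purely to show that random assignment recovers the effect:

```python
# Minimal sketch of the ideal effect-assessment design: random assignment
# to treatment vs. control, effect estimated as the difference in means.
import random
random.seed(42)

n = 200                                   # units of analysis (e.g., participants)
assignment = [True] * (n // 2) + [False] * (n // 2)
random.shuffle(assignment)                # random assignment removes selection bias

# Hypothetical data-generating process: baseline 10, true effect +2, noise sd 3.
# In a real evaluation the outcomes are measured, not simulated.
outcomes = [10.0 + (2.0 if treated else 0.0) + random.gauss(0, 3)
            for treated in assignment]

y_t = [y for y, t in zip(outcomes, assignment) if t]
y_c = [y for y, t in zip(outcomes, assignment) if not t]
effect = sum(y_t) / len(y_t) - sum(y_c) / len(y_c)
print(f"estimated effect: {effect:.2f}")  # should land near the true +2.0
```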
Measurement Validity
- Requires good conceptualization;
- requires reliable and valid measures of both the cause (the treatment program) and the effect (the expected outcome).
There are genuine difficulties in creating valid indicators of program outcomes.
Timing of outcome measurement matters:
- effects of the program may be lagged, continuous/gradual, or instantaneous.
To increase measurement validity: use multiple (independent) measures, taken at different points in time; one common reliability check is sketched below.
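The slides do not name a specific technique, but one standard check on a set of multiple measures of the same outcome is internal-consistency reliability, e.g. Cronbach's alpha; a minimal sketch with hypothetical ratings:

```python
# Cronbach's alpha for k measures of the same construct (hypothetical data).
# alpha = k/(k-1) * (1 - sum of item variances / variance of the total score)

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

# Rows = respondents, columns = three independent measures of one outcome.
data = [
    [4, 5, 4],
    [3, 3, 2],
    [5, 4, 5],
    [2, 2, 3],
    [4, 4, 4],
]

k = len(data[0])
items = list(zip(*data))                 # one tuple per measure
totals = [sum(row) for row in data]      # total score per respondent

alpha = k / (k - 1) * (1 - sum(variance(i) for i in items) / variance(totals))
print(f"Cronbach's alpha = {alpha:.2f}")  # values near 1 indicate consistency
```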
External Validity
- Random samples or 'true' experiments are most often not feasible, so evaluations typically work with non-probability samples.
Selection biases:
- self-selection into the treatment group;
- selection of program participants because they are likely to generate positive results or simply because they are available.
The social context of the evaluation may also threaten external validity.