1 EVALUATION AND RESEARCH
© LOUIS COHEN, LAWRENCE MANION AND KEITH MORRISON

2 STRUCTURE OF THE CHAPTER
Similarities between research and evaluation
Differences between research and evaluation
Connections between evaluation, research, politics and policy making
© 2018 Louis Cohen, Lawrence Manion and Keith Morrison; individual chapters, the contributors

3 DEFINING EVALUATION
The provision of information about specified issues upon which judgements are based and from which decisions for action are taken.

4 COMPARING RESEARCH AND EVALUATION
Origins: Research questions originate from scholars working in a field; evaluation questions issue from stakeholders.
Audiences: Evaluations are often commissioned; they become the property of the sponsors and do not enter the public domain, whereas research is disseminated widely and publicly.
Purposes: Research contributes to knowledge in the field, regardless of its practical application, and provides empirical information, i.e. ‘what is’; evaluation is designed to use that information and those facts to judge the worth, merit, value, efficacy, impact and effectiveness of something, i.e. what is valuable.
Research is conducted to gain, expand and extend knowledge; evaluation is conducted to assess performance and to provide feedback.
Research is to generate theory; evaluation is to inform policy making.
Research is to discover; evaluation is to uncover.
Research seeks to predict what will happen; evaluation concerns what has happened or what is happening.
Stance: The evaluator is reactive (e.g. to a programme); the researcher is active and proactive.
Status: Evaluation is a means to an end; research is an end in itself.

5 COMPARING RESEARCH AND EVALUATION
Focus: Evaluation is concerned with how well something works; research is concerned with how something works.
Outcome focus: Evaluation is concerned with the achievement of intended outcomes; research may not prescribe or know its intended outcomes in advance (science concerns the unknown).
Participants: Evaluation focuses almost exclusively on stakeholders; research has no such focus.
Scope: Evaluations are concerned with the particular, e.g. a focus only on specific programmes. They seek to ensure internal validity and often have a more limited scope than research. Research often seeks to generalize (external validity) and, indeed, may not include evaluation.
Setting of the agenda: The evaluator works within a given brief; the researcher has greater control over what will be researched (though often constrained by funding providers). Evaluators work within a set of ‘givens’, e.g. programme, field, participants, terms of reference and agenda, variables; researchers create and construct the field.

6 COMPARING RESEARCH AND EVALUATION
Relevance: Relevance to the programme or what is being evaluated is a prime feature of evaluations; relevance for researchers has wider boundaries (e.g. to generalize to a wider community). Research may be prompted by interest rather than relevance. For the evaluator, relevance must take account of timeliness and particularity.
Timeframes: Evaluation begins at the start of the project and finishes at its end; research is ongoing and less time-bound (though this may not be the case with funded research).
Uses of results: Evaluation is designed to improve; research is designed to demonstrate or prove. Evaluation informs decision making; research provides a basis for drawing conclusions. Evaluations might be used to increase or withhold resources or to change practice; research provides information on which others might or might not act, i.e. it does not prescribe.
Decision making: Evaluation is used for micro decision making; research is used for macro decision making.
Data sources and types: Evaluation has a wide field of coverage (e.g. costs, benefits, feasibility, justifiability, needs, value for money), so evaluators employ a wider and more eclectic range of evidence from an array of disciplines and sources than researchers.

7 COMPARING RESEARCH AND EVALUATION
Ownership of data: The evaluator often cedes ownership to the sponsor upon completion; the researcher holds onto the intellectual property.
Politics of the situation: The evaluator may be unable to stand outside the politics of the purposes and uses of, or participants in, an evaluation; the researcher provides information for others to use.
Use of theory: Researchers base their studies in social science theory; this is not a necessary component of evaluation. Research is theory-dependent; evaluation is ‘field-dependent’, i.e. not theory-driven but derived from the participants, the project and stakeholders. Researchers create the research findings; evaluators may (or may not) use research findings.
Reporting: Evaluators report to stakeholders/commissioners of research; researchers may include these and may also report more widely, e.g. in publications.

8 COMPARING RESEARCH AND EVALUATION
Standards for judging quality: Judgements of research quality are made by peers; judgements of evaluation are made by stakeholders.
For researchers, standards for judging quality include validity, reliability, accuracy, causality, generalizability and rigour; for evaluators, to these are added utility, feasibility, involvement of stakeholders, side-effects, efficacy and fitness for purpose (though, increasingly, utility value and impact are seen as elements for judging research).

9 SIMILARITIES BETWEEN EVALUATION AND RESEARCH
Evaluation can examine the effectiveness of a programme or policy, as can research.
Evaluation and research share the same methodologies (styles, instrumentation, sampling, ethics, reliability, validity, data analysis techniques, reporting and dissemination mechanisms).

10 DIFFERENCES BETWEEN EVALUATION AND RESEARCH (Smith, M. and Glass, G. (1987) Research and Evaluation in the Social Sciences. New Jersey: Prentice Hall)
The intents and purposes of the investigation
The scope of the investigation
Values in the investigation
The origins of the study
The uses of the study
The timeliness of the study
Criteria for judging the study
The agendas of the study

11 DIFFERENCES BETWEEN EVALUATION AND RESEARCH (Norris, N. (1990) Understanding Educational Evaluation. London: Kogan Page)
The motivation of the enquirer
The objectives of the research
Laws versus description
The role of explanation
The autonomy of the enquiry

12 DIFFERENCES BETWEEN EVALUATION AND RESEARCH (Norris, N. (1990) Understanding Educational Evaluation. London: Kogan Page)
Properties of the phenomena that are assessed
Universality of the phenomena studied
Salience of the value question
Investigative techniques
Criteria for assessing the activity
Disciplinary base

13 CONFORMATIVE EVALUATION (Stronach, I. and Morris, B. (1994) Polemical notes on educational evaluation in an age of ‘policy hysteria’. Evaluation and Research in Education, 8 (1-2), pp. 5-19)
Short term; takes project goals as given.
Ignores the evaluation of longer-term outcomes.
Gives undue weight to the perceptions of programme participants who are responsible for the successful development and implementation of the programme: ‘over-reports’ change.
Neglects/‘under-reports’ the views of some practitioners and critics.

14 CONFORMATIVE EVALUATION
Adopts an atheoretical approach, and regards the aggregation of opinion as the determination of significance.
Involves a tight contractual relationship with programme sponsors that debars public reporting or encourages self-censorship to protect future funding.
Risks implicit advocacy of the programme in its reporting style.

15 MODELS OF EVALUATION
Survey: cross-sectional, longitudinal
Experiment
Illuminative
The CIPP model (Stufflebeam): Context, Input, Process, Product
The Countenance model (Stake): Antecedents, Transactions, Outcomes
Look for congruence between what was intended to happen and what actually happened in these four areas.
Objectives: How far have the objectives been achieved?

16 STAKE’S MODEL OF EVALUATION
Congruence between intentions and observations (what actually happened):

INTENTIONS                Congruence    OBSERVATIONS
Intended antecedents      <–>           Actual antecedents
Intended transactions     <–>           Actual transactions
Intended outcomes         <–>           Actual outcomes

Antecedents = initial conditions
Transactions = processes, what takes place during the programme

17 RESEARCH, POLITICS AND POLICY MAKING
Politics, research and evaluation are inextricably linked in respect of:
Funding
Policy-related research
Commissioned research
Control and release of data and findings
Dissemination of research
How does research influence policy?
Who judges research utilization?
Consonance with political agendas

18 RESEARCH, POLITICS AND POLICY MAKING
Researchers and policy makers may have conflicting:
Interests
Agendas
Audiences
Time scales
Terminology
Concern for topicality

19 RESEARCH, POLITICS AND POLICY MAKING
Policy makers like:
Simple impact models
Superficial facts
Unequivocal data
Short-term solutions
Simple, clear remedies for social problems
Certainty
Positivist methodologies

Researchers work with:
Complex models
Complex data
Uncertain findings
Longer-term time scales
Subtle, provisional data on complex issues
Conjecture
Diverse methodologies

