CAST Project, funded by the European Commission, Directorate-General Energy & Transport, under the 6th RTD Framework Programme
CAST Plenary Meeting, 30 September, Oslo
1. OBJECTIVES WP2
Main research question: how to effectively measure and report the effects of campaigns?
Developing two practical tools:
– Evaluation tool
– Reporting tool: guidelines for reporting campaign results in a standardised way
D2.1: TYPOLOGY OF EVALUATION METHODS
OBJECTIVE
Investigation of how road safety campaigns in Europe and beyond have been implemented and evaluated:
– typology of road safety campaigns with respect to theme, target group, objectives, media plan, etc.
– inventory of evaluation methodologies with respect to research design, measurement variables, data collection methods and techniques
RESULT
– current state of the art regarding road safety campaigns (RSCs) and their evaluation
– strengths and weaknesses of current evaluation reports
– identification of relevant attributes of RSCs that have significant implications for evaluation
D2.1: TYPOLOGY OF EVALUATION METHODS
RESULT: current state of the art of RSCs and their evaluation
– national campaigns lasting up to one month and forming part of a long-term strategy
– general themes such as speeding, seat belt use and intoxicated driving
– frequently used media channels: television, radio advertising, billboards, free press and the internet
– message appeal described as informative, emotional and confronting
– integrated campaigns
EVALUATION
– single-group evaluation designs with either one or multiple measurements
– self-reported or observational data
– frequently measuring both reach/recall and effectiveness of the campaign
D2.1: TYPOLOGY OF EVALUATION METHODS
RESULT: strengths of current evaluation reports
– theme, approach, media channels used and accompanying activities are mentioned
– detailed description of the target group
– most campaigns are evaluated in terms of both reach/recall (reception of the campaign) and effectiveness (measurement) variables
D2.1: TYPOLOGY OF EVALUATION METHODS
RESULT: weaknesses of current evaluation reports
– exact running period/duration of the campaign is missing
– objectives are not clearly defined
– objectives are not always clearly separated into the main groups: knowledge, awareness, attitude, behaviour and accident rate
– media production costs and evaluation costs are hardly ever known
– information about accompanying activities, in the case of an integrated campaign, is often missing
– measurement variables are often not consistent with the pre-set objectives
– cost-benefit and/or cost-effectiveness analyses are seldom carried out
D2.1: TYPOLOGY OF EVALUATION METHODS
RESULT: identification of attributes with significant implications for evaluation
– scope
– target group
– objectives
– accompanying activities
– a-priori knowledge
Each relevant attribute has its particular implications for the evaluation; the combination of implications determines the appropriate evaluation methodology for a campaign (Deliverable 2.3).
D2.2: COMPARISON OF RESEARCH DESIGNS
AIM
– listing ALL measurement variables, research designs, data collection methods and techniques that are at least theoretically possible to use for a summative evaluation
– comparison with regard to their merits and weaknesses in measuring the effect of a media or integrated RSC
RESULT
Introduction of the appropriate evaluation methodology for different campaign types
D2.2: COMPARISON OF RESEARCH DESIGNS
Classification of evaluation types:
1. Formative evaluation – before implementation
2. Summative evaluation – effect (were the goals reached?) and reach/recall
3. Economic evaluation – cost-benefit analysis (CBA) and cost-effectiveness analysis (CEA)
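The two economic evaluation types named above reduce to simple ratios, which can be sketched as follows. This is a minimal illustration, not a CAST procedure; all figures and function names are invented for the example.

```python
def benefit_cost_ratio(monetised_benefits: float, total_costs: float) -> float:
    """CBA: a campaign is economically justified when this ratio exceeds 1."""
    return monetised_benefits / total_costs


def cost_effectiveness(total_costs: float, effect_units: float) -> float:
    """CEA: cost per unit of effect (e.g. per prevented accident),
    used when benefits are hard to monetise."""
    return total_costs / effect_units


# Illustrative numbers only (not from the CAST project):
costs = 250_000.0            # media production + evaluation costs, EUR
benefits = 900_000.0         # monetised value of prevented accidents, EUR
prevented_accidents = 12.0

print(benefit_cost_ratio(benefits, costs))             # 3.6
print(cost_effectiveness(costs, prevented_accidents))  # ~20833 EUR per prevented accident
```

The choice between the two mirrors the weakness noted in D2.1: such analyses are seldom carried out, partly because monetising benefits (needed for CBA) is harder than counting effect units (enough for CEA).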
D2.2: COMPARISON OF RESEARCH DESIGNS
Evaluation methodology consists of:
– measurement variable(s)
– research design
– data collection method(s)
– data collection techniques
Every component is presented and discussed in detail.
D2.2: COMPARISON OF RESEARCH DESIGNS
Measurement variables: assessment of reach and effectiveness
Three groups of measurement variables:
– self-reported measures
  – reach, recognition, recall, likeability, comprehension
  – social-cognitive variables and behaviour
– observed behaviour
– changes in accident statistics
! It is important to assess the right variable in relation to the pre-set objectives of the campaign.
D2.2: COMPARISON OF RESEARCH DESIGNS
Research design: experimental vs. quasi-experimental
For each design:
– aim
– inputs/outputs
– threats to validity
– applicability for evaluation
Selecting a design involves a trade-off between:
– rigour and applicability
– costs
– other practical issues
D2.2: COMPARISON OF RESEARCH DESIGNS
Data collection methods:
– asking: questionnaires, interviews, key informants, focus groups
– observing: on-site observation
– document analysis: statistics
Each method has its advantages and disadvantages; the methods are complementary.
D2.2: COMPARISON OF RESEARCH DESIGNS
CONCLUSION
– Each RSC is different, so each needs an appropriate evaluation methodology.
– All evaluation elements are related:
  – the data collection method determines the data collection techniques and measurement variables
  – measurement variables also depend on the aim/objectives of the campaign
– Isolating the effect of an integrated campaign requires choosing a proper design:
  – comparison between different phases and elements of the campaign
  – before-after measurements
  – multiple intervention groups/periods
– How does this work in practice?
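The before-after design with a comparison group mentioned above can be sketched as a difference-in-differences calculation. This is a hedged illustration under invented figures; the variable names are ours, not CAST terminology.

```python
def diff_in_diff(camp_before: float, camp_after: float,
                 ctrl_before: float, ctrl_after: float) -> float:
    """Change in the campaign area minus change in a comparison area.

    Subtracting the comparison-area trend removes background influences
    (season, weather, enforcement elsewhere) that would bias the
    single-group before-after designs common in current RSC evaluations.
    """
    return (camp_after - camp_before) - (ctrl_after - ctrl_before)


# Illustrative seat belt wearing rates (%) before and after a campaign:
effect = diff_in_diff(camp_before=78.0, camp_after=86.0,
                      ctrl_before=79.0, ctrl_after=81.0)
print(effect)  # 6.0 percentage points attributable to the campaign
```

A single-group design would have reported the full 8-point change; the comparison group reveals that 2 points of it are a general trend, which is exactly why D2.2 argues for multiple intervention groups/periods where feasible.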
D2.3: EVALUATION TOOL
AIM
– a practical tool to help researchers/practitioners evaluate a single campaign
– a best-practice manual depending on the characteristics of the campaign
RESULT
Finalisation and adaptations according to the review comments, the workshops and WP4 will be discussed during the technical WP2 meeting in Oslo (1-2 October 2008).
D2.3: EVALUATION TOOL
Important attributes:
Scope
– scale on which to measure effectiveness
– implications for e.g. the research design
Objectives
– clearly defined objectives can serve as criteria for the campaign's success or failure
– implications for data collection techniques / measurement variables
Target group
– clear specification
– implications for the choice of research design
Accompanying activities
– issues around how to isolate the effects of the media campaign
A-priori knowledge
– can be used as a before measurement
D2.3: EVALUATION TOOL
Theme
– influences the evaluation questions, but the objectives are far more determinative
Message appeal
– an important factor in explaining success/failure, but not crucial from an evaluation perspective
Media coverage
– choice depends on the objectives and the target group
– influences the evaluation questions and the choice of measurement variables
D2.3: EVALUATION TOOL
CAST workshops 2008 – sessions on the evaluation tool:
– general remarks regarding campaign evaluation
– expectations of an evaluation tool
– possibilities and/or restrictions in implementing the CAST evaluation methodology
Opposing propositions regarding:
– general guidelines or a specific standardised questionnaire?
– what to measure?
– feasibility and necessity of isolating the effects of an integrated campaign
– minimum criteria or best practice?
D2.3: EVALUATION TOOL
Comments from the CAST workshops 2008:
– the tool should contain a minimum tool – minimum standards for evaluation
– two tools: one to define campaign objectives and one for the evaluation itself
– simple to use and cost-efficient
– how can you learn from the evaluation results?
– checklist!
– message: evaluations are important; reasons to convince people…
– marketing of the tool?
D2.3: EVALUATION TOOL
– ready-to-use questions with specific examples
– necessity of a thorough situational analysis to identify the relevant concepts
– a compulsory and an optional list of measurement variables
– clear statements on the feasibility (or not) of measuring the isolated effect of the media campaign itself; what is the effect of enforcement alone? (PEPPER?)
– several recommendations for several types of campaigns (one single best practice is not possible), with a lot of examples!
D2.3: EVALUATION TOOL
Comments from WP4? Report on Wednesday 22 October 2008.
D2.4: REPORTING TOOL
AIM
– guidelines for fieldworkers and researchers for reporting the effects of a single campaign in a standardised way
– a template for writing down the working method and the results of the evaluation
RESULTS
– updated document, December 2007
– structure will be linked to D2.3
– finalisation and adaptations according to the WP4 review will be discussed during the technical WP2 meeting in Oslo (1-2 October 2008)
3. CONCLUSION
– comments from the workshops: OK
– comments from WP4: 22 October 2008
– mid-December: deadline – review
– a lot of planning and work to do!