
1 Evaluation of Effectiveness. Part 5: Principles: How to Win the Battle of the Buzz. Chapter 19 (Prentice Hall, © 2009)

2 CHAPTER KEY POINTS: Questions We'll Answer
– How well do you understand why and how advertising evaluation is conducted?
– Can you list and explain the stages of message evaluation?
– What are the key areas of media evaluation?
– How are campaigns and IMC programs evaluated?

3 IMPACT: DOES IT WORK?
Many executives feel advertising is only successful if it produces sales. Others feel advertising should emphasize long-term brand building. If advertising delivers the desired communication effects but sales don't increase, was the advertising ineffective? How does impact work?

4 Evaluating Effectiveness (IMPACT: DOES IT WORK?)
Intuitive analysis is based on an experienced manager's judgment. Measurement tracks consumer responses with structured feedback such as response cards and calls. Formal evaluation is necessary because:
– Financial stakes are high: producing a :30 spot averages $200,000, and national media costs several million.
– Advertising optimization: reducing the risk of failure through testing, analyzing, tracking performance, and making changes to increase performance.
– Identifying "best practices": what works and what doesn't, so brand advertising continues to improve.

5 Types of Evaluation (IMPACT: DOES IT WORK?)
– Testing: to predict results. Sample ads are tested before they run.
– Monitoring: to track performance. Performance is tracked to see if anything needs to be changed.
– Measurement: to evaluate the results. The results, or actual effects, are measured after the campaign runs.

6 Stages of Evaluation (IMPACT: DOES IT WORK?)
1. Developmental research: pretesting to see whether an idea will work or whether another idea is better.
2. Concurrent research: tracking studies and test marketing to see how the campaign is unfolding and how the messages and media are working.
3. Posttesting research: comparing the campaign's impact after it's over against a benchmark, baseline, or other starting point.
4. Diagnostic research: taking apart an ad to see which elements are working and which aren't, examining it frame by frame or piece by piece.

7 Facets: Measuring Responses (IMPACT: DOES IT WORK?)
It's difficult to measure advertising's effect on sales:
– Other factors affect sales (e.g., pricing, distribution, competition), making it hard to isolate advertising's impact.
– Effects are delayed, so it's hard to link sales to advertising.
Communication effects can be measured as surrogate measures for sales impact: awareness of the advertising, purchase intention, preference, liking.
Good evaluation plans, as well as effective promotional work, are guided by a model of how people respond to advertising.

8 Copy Testing (MESSAGE EVALUATION)
Companies that conduct copy-testing research use diagnostic methods to identify an ad's strong and weak points:
– Ameritest: brand linkage, attention, motivation, communication, flow of attention and emotion through the commercial.
– ARS: persuasion, brand/ad recall, communication.
– Diagnostic research: brand recall, main idea, attribute statements (importance, uniqueness, believability).
– IPSOS-ASI: recall, attention, brand linkage, persuasion (brand switch, purchase probability), communication.
– Mapes and Ross: brand preference change, ad/brand recall, idea communication, key message delivery, like/dislike, believability, comprehension, desire to take action, attribute communication.
– Millward Brown: branding, enjoyment, involvement, understanding, ad flow, brand integration, feelings toward the ad, main stand-out idea, likes/dislikes, impressions, persuasion, new news, believability, relevance.
– RoperASW: overall reaction, strengths and weaknesses, understanding, clutter-busting, attention, main message, relevance, appeal, persuasiveness, motivating trial, purchase intent.

9 Message Development Research (MESSAGE EVALUATION)
– Concept testing: compares the effectiveness of various message strategies and their creative ideas (the Big Idea).
– Pretesting: helps marketers make final go/no-go decisions about finished or nearly finished ads, using photoboards or animatics.
– Diagnostics: designed to diagnose the strengths and weaknesses of ideas, to improve work still in development or to learn how to improve subsequent advertisements.

10 During Execution: Concurrent Testing (MESSAGE EVALUATION)
– Coincidental surveys: in broadcast media, random calls to the target market determine station choices, ads they've seen or heard, and brand perceptions.
– Tracking studies: every 3 to 6 months, measure top-of-mind brand awareness. Brand tracking follows the performance of the brand itself.
– Test markets: evaluate product variations and campaign or media elements, generally using two or more test markets with other markets serving as controls.

11 Posttesting: After-Execution Research (MESSAGE EVALUATION)
– Breakthrough (attention): interest, enjoyability, liking.
– Engagement tests: eye-tracking as readers scan ads.
– Memory tests: recognition tests and recall tests (unaided and aided recall).
– Emotion tests: MRI measures brain activity.
– Likeability tests: relevant, important, enjoyable, entertaining, fun.
– Persuasion tests: intention to buy, motivation.
– Inquiry tests: measure the number of responses to an ad.
– Scanner research: tallies purchases and collects consumer buying information.
– Single-source research: advertising and brand purchase data come from the same households, linking advertising to sales.

12 Evaluating Audience Exposure (MEDIA EVALUATION)
– How did each media vehicle perform? Were reach and frequency objectives met? Services include Simmons-Scarborough, Arbitron, and Mediamark.
– For outdoor advertising, traffic counts don't equal exposure.
– For Web or Internet advertising, what is measured and how does it compare to traditional media: hits, click-throughs, minutes spent?
– Alternative or guerrilla marketing is even more difficult to equate to traditional media.

13 Advertising ROI and Media Efficiency (MEDIA EVALUATION)
– Return on investment (a cost-to-sales ratio) is hard to calculate because many factors affect sales.
– How do you determine whether you're overadvertising or underadvertising?
– Wearout: recall stabilizes or declines and irritation increases until response falls off or stops (can be a combination of creative impact and media buying).
– Media optimization: the goal is optimum media performance, getting the most impact for the investment.
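Because many factors affect sales, any numeric ROI rests on an attribution assumption. Below is a minimal sketch (Python, with invented figures) of the cost-to-sales ratio mentioned above and a simple ROI calculation that only holds if incremental sales can credibly be credited to the campaign; the variable names and numbers are illustrative, not from the textbook.

```python
# Hypothetical illustration of the ratios discussed above; the figures are
# invented, and a real ROI depends on isolating advertising's effect on sales.

ad_spend = 2_500_000           # total media + production cost ($)
total_sales = 40_000_000       # sales during the campaign period ($)
incremental_sales = 5_000_000  # sales attributed to the campaign (the hard part)
margin = 0.30                  # contribution margin on those incremental sales

ad_to_sales_ratio = ad_spend / total_sales
roi = (incremental_sales * margin - ad_spend) / ad_spend

print(f"Advertising-to-sales ratio: {ad_to_sales_ratio:.1%}")
print(f"ROI (only if attribution holds): {roi:.1%}")
```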

14 Why Evaluate Campaigns? (EVALUATING MARKETING COMMUNICATIONS CAMPAIGNS)
– The last, and perhaps most important, stage in the development of a campaign plan.
– Determines whether the campaign's message and media were effective.
– Measures the overall impact on the brand, while the individual pieces are still evaluated to determine their own effectiveness.

15 Marcom Tools (EVALUATING MARKETING COMMUNICATIONS CAMPAIGNS)
Certain marketing communication functions, such as public relations and sales promotion, do some things better than other areas. An integrated plan uses the best tools to accomplish the desired effect.
Principle: Advertising is particularly effective at accomplishing objectives such as creating exposure, awareness, and brand image, and delivering brand reminders.

16 Direct Response (EVALUATING MARKETING COMMUNICATIONS CAMPAIGNS)
– The objective is to generate an immediate behavioral response (a transaction, a purchase).
– Uses toll-free numbers, mail-in coupons, a Web site or email address, or an offer in the copy.
– Response is easy to measure in terms of effectiveness and ROI: total responses divided by total pieces mailed, multiplied by 1,000, gives responses per thousand (RPM), as in the sketch below.
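A minimal sketch of that direct-response arithmetic, using hypothetical mailing figures; the multiplication by 1,000 is what turns the raw response rate into responses per thousand.

```python
# Direct-response math from the slide: responses relative to pieces mailed.
# All figures are hypothetical.

pieces_mailed = 50_000
responses = 1_250

response_rate = responses / pieces_mailed         # 0.025, i.e., 2.5%
responses_per_thousand = response_rate * 1_000    # RPM = 25 responses per 1,000 pieces

print(f"Response rate: {response_rate:.1%}")
print(f"RPM: {responses_per_thousand:.0f} responses per thousand pieces mailed")
```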

17 Sales Promotion (EVALUATING MARKETING COMMUNICATIONS CAMPAIGNS)
– It may be necessary to evaluate both trade and consumer promotions.
– Payout analysis compares the cost of a promotion to the sales it is expected to generate.
– Breakeven analysis finds the point at which the total cost of the promotion begins to exceed the total revenue it produces (see the sketch below).
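A short sketch of both analyses with hypothetical promotion figures; "contribution per unit" is an assumed margin figure introduced for the illustration, not something named on the slide.

```python
# Payout analysis: compare the promotion's cost to the sales it is expected to generate.
# Breakeven analysis: find the volume at which the promotion's cost is just covered.
# All figures are hypothetical.

promotion_cost = 120_000          # coupons, displays, trade allowances ($)
expected_incremental_units = 80_000
contribution_per_unit = 2.00      # margin earned on each incremental unit ($)

expected_payout = expected_incremental_units * contribution_per_unit - promotion_cost
breakeven_units = promotion_cost / contribution_per_unit

print(f"Expected payout: ${expected_payout:,.0f}")
print(f"Breakeven volume: {breakeven_units:,.0f} incremental units")
```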

18 Public Relations (EVALUATING MARKETING COMMUNICATIONS CAMPAIGNS)
Measure success in getting out the message in terms of outputs and outcomes:
– Output: materials produced and distributed; how many press releases ran.
– Outcome: acceptance and impact of the materials; changes in public opinion.
– Content analysis: Was the coverage favorable?
– Public opinion studies: Have attitudes, behaviors, or knowledge changed?

19 Web Site Evaluation (EVALUATING MARKETING COMMUNICATIONS CAMPAIGNS)
– Traffic volume: page views and site visitors.
– Click-through rates: for ads sold as pay-per-click.
– Cost per lead: an attempt to measure ROI using a conversion rate (the percentage of visitors who complete the desired action); a sketch follows.
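A brief sketch of these web metrics with invented traffic and cost figures; the variable names are illustrative, not a standard analytics API.

```python
# Web-site evaluation metrics from the slide; all traffic and cost figures are hypothetical.

ad_impressions = 200_000   # times the pay-per-click ad was shown
clicks = 3_000             # paid clicks on those ads
cost_per_click = 1.50      # $ paid per click
site_visitors = 60_000     # total visitors to the site
leads = 240                # visitors who completed the desired action (e.g., a lead form)

click_through_rate = clicks / ad_impressions
conversion_rate = leads / site_visitors              # share of visitors who convert
cost_per_lead = (clicks * cost_per_click) / leads    # ad spend per lead generated

print(f"Click-through rate: {click_through_rate:.2%}")
print(f"Conversion rate: {conversion_rate:.2%}")
print(f"Cost per lead: ${cost_per_lead:.2f}")
```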

20 Special Advertising Situations: Retail Advertising (EVALUATING MARKETING COMMUNICATIONS CAMPAIGNS)
– Objective: generate store traffic. Simple counts of people at promotions and events.
– Objective: visibility. Participation counts at events or "how-to" classes; sign-up and fill-out forms.
– Objective: loyalty. Participation in frequency clubs or loyalty programs.

21 Special Advertising Situations: B2B Advertising (EVALUATING MARKETING COMMUNICATIONS CAMPAIGNS)
– Objective: generate responses and sales leads. Lead counts based on calls, emails, and cards returned to the advertiser.
– Objective: conversion rate, the number (or share) of leads who go on to make a purchase (see the sketch below).
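A small sketch of that lead-count and conversion-rate arithmetic; the lead sources and figures are hypothetical.

```python
# B2B lead tracking from the slide: count leads by source, then the share who purchase.
# All figures are hypothetical.

leads_by_source = {"calls": 320, "emails": 540, "reply cards": 140}
purchasers = 75   # leads who went on to make a purchase

total_leads = sum(leads_by_source.values())
conversion_rate = purchasers / total_leads

print(f"Total leads: {total_leads}")
print(f"Conversion rate: {conversion_rate:.1%}")
```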

22 Special Advertising Situations: International Advertising (EVALUATING MARKETING COMMUNICATIONS CAMPAIGNS)
– Difficult to evaluate because of the number of markets, the distances, the cost, and the variety of cultures.
– Evaluation should focus initially on pretesting, to help catch big problems (due to unfamiliarity with the culture, language, or consumer behavior) before they occur.

23 Campaign Evaluation (EVALUATING MARKETING COMMUNICATIONS CAMPAIGNS)
– It's difficult to evaluate and estimate the impact of synergy.
– Brand tracking can measure campaign effectiveness by adding and removing ingredients and studying the effects of those changes.
– The challenge: look at the big picture rather than the individual pieces and parts.
– Advertisers seek an evaluation method that brings all the individual metrics together to efficiently and effectively evaluate and predict communication effectiveness.

24 Copyright © 2009 Pearson Education, Inc., publishing as Prentice Hall. All rights reserved. No part of this publication may be reproduced, stored in a retrieval system, or transmitted, in any form or by any means, electronic, mechanical, photocopying, recording, or otherwise, without the prior written permission of the publisher. Printed in the United States of America.

