Data Collection and Aggregation

1 Data Collection and Aggregation
Presented by AmeriCorps Program Staff and JBS International. This presentation is designed to provide you with the basic concepts of data collection and aggregation. You may already be familiar with some of these concepts, but others might be new to you. This is an overview of an important topic, and in the future we will have new and more advanced tools. Right now many of you are midyear in your grant and either preparing for the midyear progress report or already thinking about next year. We hope this will help you reflect on what you have collected, think about next year, and make some improvements in your data collection and aggregation systems. This presentation should make you think about how you are shaping questions to get the appropriate data for the focus area and outcome you are addressing. Throughout the presentation we will make references to materials on the Resource Center; additional tools are always being added.

2 Learning Outcomes As a result of this session, participants will:
Understand core concepts associated with data collection and aggregation; Better understand how to assess the quality of your current data collection tools and systems; and Identify upgrades you need to make to ensure rigorous data collection and meaningful reporting. No matter what focus area you're working in, we want to make everyone comfortable with the core concepts. We are not going into details about specific data collection tools today. Instead, this session is designed to give you a more general foundation in the core concepts of data collection and aggregation. In our experience, it is relatively easy to understand the basic gist of what each concept means. You hear the words and think, "Yeah, I know what that is." We want to make sure you have a deeper understanding so you can start asking yourself some key questions and make sure that your data collection systems ask the right questions to get the best quality data possible. This presentation will go through a series of core concepts and the steps needed to put them into action. For some of you, this presentation will be more of a review and offer validation that your tools and systems are already strong. For others, hopefully we will fill in some knowledge gaps and help you start identifying which aspects of your current data collection need further attention.

3 Agenda What are You Trying to Measure?
Are You Measuring it the Best Way? Measuring Outcomes Data Collection Method Considerations Instrument Considerations We will be covering: questions you should be asking, data collection methods, and considerations around instruments.

4 Food for Thought Why are you doing the intervention?
What change do you want to create? Can you measure the change/outcome? If you can't measure the outcome, are you sure you are doing the right thing? So, before we start with the core concepts, here are some questions about your Theory of Change you might want to keep in mind. Presumably, you have already addressed these questions in designing your program and in your application. The answers to these questions provide the foundation of your data collection. We are not going to spend time on these now, but be sure to review them later. There are materials on the Resource Center specifically about Theory of Change if that is a concept you are not yet comfortable with.

5 Now What? You have identified the desired change (outcome) through your program design and theory of change…. Are you clear about WHAT to measure? AND Are you measuring it the “BEST” way? As most of you are midway through the program year right now, you know what outcome you have committed to measure. Your grant application was approved and your members are actively providing your program’s intervention in the community. This is a perfect time to consider data collection and aggregation in more details. You can make tweaks for this year and changes for next. So there are two questions to ask yourself: Are you clear about WHAT to measure? AND Are you measuring it the “BEST” way? Remember - Best is relative. By “best”, we mean your chosen data collection methodology is appropriate to what you’re trying to measure, it’s rigorous, and works for your program. It depending on situations with your program. What are your resources, what connections, relationship and agreements do you have with your sites/partners to get the data back. There is no one best way, do what works for your program.

6 What are Your Desired Outcomes?
What question are you trying to answer? What change are you expecting? Did you select one of the CNCS priority measures? Did you create your own program-specific outcome? Either way…the data collection and aggregation basics are the same. You fit into one of two categories: either you selected your outcome measure from one of the CNCS priority measures described in the NOFO, and you may be using a sample tool you found on the Resource Center; or you created your own program-specific outcome measure and designed your own tools. Whichever applies to your situation, the basics we are going to cover today are the same. Even if you are using a tool that has been proven to yield great data for another program similar to yours, that doesn't always mean it is truly appropriate for your program.

7 Driving School Example
Intervention: ABS Driving School gives a 10-week course that meets twice a week for 60 minutes and includes classroom-based and on-road lessons on driving skills. Desired Outcome: Students have basic driving proficiency. Here's an example that we can all relate to and use to ask, "Are we really asking the right questions to get to our desired outcome?" Our example features a driving school because it is easier to see the disconnect between what you say you are going to measure and what you actually do measure when it's a subject you don't work in. We all come with neutral eyes and can see how it works. The ABS Driving School has a set course of study including both classroom and on-road lessons. The desired outcome is for the student to earn their driver's license. We need to come up with a way to measure the student's driving skill level. We will begin with some seemingly straightforward questions to illustrate how the phrasing of a question can yield answers that don't reflect the outcome we're looking to measure. This will let us see where the disconnect happens and what can be done to correct it.

8 Real Life Example: Driving Test
Question: Do you like driving? Answer: I LOVE driving!! "Do you like driving?" gets information about an attitude – not skill level. So after the course we ask the student, "Do you like driving?" And the student answers, "I love driving!" That tells us about their enthusiasm, which is great to hear, but it only speaks to the student's attitude and not their skill level on the road. Student attitudes toward driving may be interesting or useful information to know, but they don't relate to the outcome we are trying to measure and report.

9 Real Life Example: Driving Test
Question: Do you think you are a skilled driver? Answer: I think I am a GREAT driver! "Are you a skilled driver?" gets information about self-perception, a thought – not actual skill level. Self-ratings are subjective, NOT objective. So we change the question and ask the student, "Do you think you are a skilled driver?" The student answers, "I think I am a GREAT driver." Framing the question this way, we get back a self-perception – we don't know if it's accurate. It's a subjective view and doesn't give us an objective sense of how the student's skills compare to those of other drivers.

10 Real Life Example: Driving Test
Question: Do you know the state driving laws? Answer: I got 100% correct on my written driver's test! Knowing state driving laws reflects knowledge – not actual skill level, even though it is objective. This question gets an objective answer: the student either passed or they didn't. But it's measuring their knowledge, not their skill behind the wheel. Can they implement what they know?

11 Real Life Example: Driving Test
Question: Did you pass your road test? Answer: YES! An on-road driving test DOES measure skill level or proficiency and is objective. Since we are trying to measure a skill, we have to ask about a demonstration of that skill. In this case, a road test. This is done by a trained facilitator with a checklist who can determine whether someone is proficient in driving and eligible for a driver's license. What to learn from this example is that how you ask the question is really important. Sometimes you ask a question that on the surface may seem like it will get you the desired outcome, but when you tease it apart you can see it is not quite hitting the mark. So that is something you need to think about as you look over the questions in your own data collection instruments: have you asked the right questions to get directly to your measured outcomes? Think about what information you really want. Are you trying to understand how someone feels, their opinion, a subjective answer? That might be appropriate, but perhaps you're after something objective, based on facts that are measurable. You may want some of both kinds of answers, but then how you ask the questions differs. So in some ways this relates to what you ask, and now we'll discuss how you're asking. As in this example, sometimes there are good reasons to measure both knowledge of traffic laws and driving proficiency. It all depends on what you are trying to measure. If I'm the traffic law instructor, I'm trying to measure whether people learned the laws. If I'm the person in the passenger seat teaching parallel parking, I want to measure proficiency. This leads us to the next slide on validity.

12 What kind of information do you want?
Subjective - includes an element of opinion or personal feeling. Example: how someone feels about driving, confidence. Objective - is not dependent upon opinions or personal feelings; it is based on facts that are observable and measurable. Example: knowledge of driving laws and driving skill. A common misconception is that if data is quantifiable, it must be objective. However, subjective beliefs, attitudes, etc. can also be quantified, and they are still subjective. For instance, "75% of new drivers reported that they love driving." We've quantified something, but we haven't quantified whether those people actually know how to drive. We shouldn't be tricked into thinking that just because you can add it up or express it as a number or percentage, it's objective. Sometimes subjective data is useful; it gives part of the picture of a person's health, which can then be further evaluated objectively. Going back to what you are trying to measure and the "ABCs" of Attitude, Behavior, or Condition: for CNCS, we want most programs to aim at measuring a change in behavior or condition, but this won't be right for every program. For instance, H9 measures an attitude: "Number of homebound or older adults and individuals with disabilities who reported having increased social ties/perceived social support." In this case, how a person perceives the social support they are receiving is really important to their overall wellbeing and ability to maintain independence. Then you have to ask what's the best way to collect that data. Here you have to figure out a few things: What is it you want to measure? What is the situation for your program (there might be some really great data collection systems that are outside your budget, so what is the next best thing that will get you really good quality information)? What is it going to take to get your sites/partners to agree? What tools are your sites/partners already using (you don't always have the ability to come in from outside and recommend something specific)? Your methods need to make sense for your situation. Different methods are more appropriate in some situations than in others. You wouldn't use the same tools for a group of kindergarteners as you would for adult learners. Appropriate to what, but also to whom. If you wanted to measure drug and alcohol use among middle schoolers, a focus group is not the way to do it. Peer pressure could cause some responders to exaggerate to seem cool and others to downplay because they don't want others in the group to know. You would want something with more anonymity.

13 How Do You Choose Your Method?
Choice depends on: what you want to measure; and the situation (i.e., resources for data collection/aggregation, site/partner agreements/restrictions, etc.) Each method is more appropriate in some situations than others (e.g., age, language, content sensitivity, etc.) Will it get you high quality data? We are not going into a lot of specifics about data collection methods, but we wanted to give you some food for thought to go through on your own. First, you need to ask yourself: how did we choose our data collection method? Was it based on proven approaches to measuring performance from evidence-based programming; because we've always done it this way; or because it's easy and we're sure to get some data? What you really want to know is: can it get you the outcome data you are trying to measure, or will it give you something else?

14 Does Your Method Measure Your Outcome?
Commonly used data collection methods Surveys Pre/post tests Observations Standardized tests Interviews Focus Groups Diaries, Journals, Self-reported Checklists Available secondary data Here are some commonly used data collection methods, with surveys and pre/post tests being among the most common. See the handout for a description and the pros/cons of each method. The handout is on the Resource Center.

15 How Do You Choose Your Instrument?
Whichever method you select, what instrument will you use? "Borrow" vs. develop Does it ask the "right" questions to get at your desired outcome? Does it have all the necessary components? What information will each question yield? How will you use information that is not related to the outcome? Once you know how you're going to collect your data, you need to think about what instrument you're going to use to do that. Here are some things to consider. We're all good borrowers in the National Service field. You don't need to recreate the wheel. But don't assume everything will fit. Take the time to analyze it and see if its questions are nuanced enough to get you what you need. Be just as rigorous with your own tools and questions. Are they getting you something that is actually going to be useful to you? Sometimes you have questions that go beyond your outcomes. There may be other information you want to know. But be clear which questions are for internal use and which are directly related to your outcomes. Remember, you only need to summarize and report out the important things. If you are asking a bunch of extra questions, ask yourself whether you really need to collect them. Handouts on the Resource Center include "Developing PM Instruments" and "Instrument Formatting Checklist."

16 What is “Qualitative” Data?
Describes or characterizes through words Focuses on meaning, experience or attitudes Collected through focus groups, interviews, open-ended questionnaire items, and other less structured situations. Not the same as anecdotal information Qualitative → Quality Let's also spend a minute thinking about qualitative and quantitative data to make sure everyone is clear about the distinctions. This relates to how you're asking the question and the kind of data you will get back. Qualitative data is a description; quantitative is a number. One is not necessarily better than the other; it depends on the situation. Think about what you want to know and then decide which type is more appropriate to collect. Qualitative data is great for getting at people's feelings, perceptions, and attitudes. This type of data is collected by asking people questions and allowing them to answer however they see fit. It could be an interview, a focus group, or an open-ended question on a survey. One of the benefits of qualitative data is that you can get a deeper sense of how someone feels about something, and they get to express it in their own words, not picking from choices you have pre-defined for them. You will get finer-grained data, but you will get a lot of material. Someone could give you a ten-minute answer. So you will need systems in place to analyze and aggregate a lot of data to find themes. Qualitative data is often confused with anecdotal information, but they are not the same thing. Anecdotes should be thought of as examples. You pick your best example of how your program has "changed someone's life." The anecdote is true for that person, and maybe some others, but it does not represent the average change experienced by everyone in the program. Anecdotes can be great descriptive snapshots of what's possible, but that's not always what's really going on. If the question you asked to get the anecdote is asked of every person and all the answers are recorded the same way, you can report on all of it and perform a "content analysis": read each answer to identify common themes, then count how many people said something about each theme.
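To make that counting step concrete, here is a minimal sketch in Python. The themes, keywords, and responses are hypothetical, and in practice content analysis is usually done by a person reading and coding each answer rather than by keyword matching.

# Minimal content-analysis sketch: count how many respondents mention each theme.
# The themes, keywords, and responses below are hypothetical examples.
responses = [
    "My mentor helped me feel more confident about school.",
    "I made new friends and feel like I belong.",
    "I feel more confident speaking up in class.",
]

themes = {
    "confidence": ["confident", "confidence"],
    "belonging": ["belong", "friends", "included"],
}

counts = {theme: 0 for theme in themes}
for answer in responses:
    text = answer.lower()
    for theme, keywords in themes.items():
        if any(keyword in text for keyword in keywords):
            counts[theme] += 1  # each respondent counted once per theme

for theme, count in counts.items():
    print(f"{theme}: {count} of {len(responses)} respondents")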

17 What is “Quantitative” data?
Focus on numbers and frequencies. Data which can be measured: length, height, area, volume, weight, speed, time, temperature, humidity, sound levels, cost, ages, scores, etc. Quantitative → Quantity So let's switch gears and talk about quantitative data, which records ratings, frequencies, and more using numbers. It is collected with a measurement tool. For example, length is measured with a ruler or a tape measure, weight with a scale, temperature with a thermometer, and so on. Let's look at an example that should make the differences a little clearer.

18 Example: How Do You Feel?
Qualitative Data: Have you felt sad or depressed at all lately, or have you generally been in good spirits? Quantitative Data: Thinking about the past week, how depressed would you say you have been on a scale from 0 to 10, where 0 means "not at all" and 10 means "the most possible"? Sample answer: I'm not at all depressed. I feel great! I love my new job. I've lost 20 pounds and feel healthier than I have in years. I'm going to ask you how you feel. This is a subjective question seeking subjective data. I can ask this in two different ways, which will influence what kind of information I will receive. So here is one more example. On a survey, if I asked, "Have you felt sad or depressed at all lately, or have you generally been in good spirits?" the response I would get would be qualitative data – words about experience and perception. Answer 1: Well, I've been in pretty rough shape lately, to tell you the truth. I mean, I haven't felt suicidal or anything like that, but I just can't seem to shake the blues. I just don't see anything to feel hopeful about in my future. I haven't really had anybody to talk to about my problems since my husband died last year. Answer 2: I'm not at all depressed. I feel great! I love my new job. And I've lost 20 pounds and feel much healthier than I have in years. I can't remember any period of my life when I've been happier. I've given the individuals an opening to say what is true for them. Or you can try to get at the same thing in a different way, and the answers will be different. To get a quantitative answer I could ask, "Thinking about the past week, how depressed would you say you have been on a scale from 0 to 10, where 0 means 'not at all' and 10 means 'the most possible'?" Someone could answer "0" or "9". Either way it is a subjective question, but it can be asked in a qualitative or quantitative way. You need to decide what level of detail you need at the end of the day.

19 Are You Asking the Question You Mean to Ask?
Does your instrument or data collection method help you measure your desired outcome? Example: If desired outcome = improved academic performance DON'T measure attendance or attitude toward school DO measure improved proficiency in a subject VALIDITY Now we're going to look at validity and reliability, two terms that are often confused. An instrument or method is valid if it asks the questions you mean to ask to get the data you want to collect. So you need to think about whether your instruments directly measure your desired outcome. If you look at this example, does your tool actually ask questions that relate to academic performance, as opposed to attendance or attitude toward school? Or does it tell you about other things that you might want to know without addressing your outcome?

20 Validity Example Desired Outcome (O11): Economically disadvantaged individuals transitioned into safe, healthy, affordable housing Data to collect (from NOFO): An inspection report and certificate of occupancy, proof of residence such as lease or mortgage, or other verification from an external agency that the work was completed and is being occupied might be used. Here’s another example for validity. You have selected this outcome and the NOFO specified what data you need to collect.

21 Validity Example (cont.)
Which questions are most likely to yield the desired information? Do you feel safe in your new home? Can you afford this house? Do you like this house? Do you have a lease or mortgage for this house? Who holds the lease/mortgage? Please share a copy of the signed paperwork. So if you were constructing a tool, a question like "Do you feel safe in your new home?" doesn't get to the desired outcome as it has been defined. Questions like "Can you afford this house?" and "Do you like this house?" might be interesting to ask, and you might have a reason to ask them, but they do not give you the data you would need to demonstrate you're achieving the desired outcome. The questions about the lease or mortgage are the most valid because they directly relate to the desired outcome.

22 What is a Standardized Test?
A test that: is administered and scored in a consistent or "standard" manner; has been validated externally on a randomly-selected population. Need not be "high-stakes" tests, time-limited tests, or multiple choice tests (e.g., state-administered proficiency tests generally should NOT be used). There is a lot of confusion about what a "standardized test" is…so let's talk about this for a minute. A standardized test is a test that is administered and scored in a consistent or standard manner AND that has been validated externally on a randomly-selected population. When they think of standardized tests, many people think first of high-stakes tests like state standards tests in various subject areas, used to measure student proficiency at various grades. (From ED5) State standardized tests generally should NOT be used, as it is expected that they will not be sufficiently tailored to the material taught, may involve long delays before the data become available, and the child's classroom teacher would have the primary effect on those scores. There is a document on the Resource Center that lists standardized instruments that can be used.

23 RELIABILITY Can It Be Repeated?
Does your instrument measure the same thing, the same way every time it is used? Does every person collecting data use the instrument the same way? Have they been trained? Are your instrument instructions clear so respondents have a similar frame to answer? RELIABILITY Reliability is asking things in a way that can be repeated so you get similar data each time the instrument is administered. To ensure your data are comparable, there are a few things to consider: Does your instrument measure the same thing, the same way every time it is used? Does every person collecting data use the instrument the same way? Have they been trained? Are your instrument instructions clear so respondents have a similar frame to answer?

24 Reliability Example Intervention: Preschool children attend early childhood education programs providing school readiness activities in three areas (social/emotional development, literacy and numeracy skills) 4 days a week for 4 hours/day Desired Outcome (ED23): Children demonstrate gains in social and/or emotional development. Here’s an example for reliability. A program is working with preschool children and wants to see gains in social and/or emotional development.

25 Reliability Example (cont.)
Observations: Open-ended Question: Does the child seem well adjusted and ready to attend kindergarten? The way this program has decided to measure these gains is through observation, and they need an instrument to compile the observations. Their first thought was to use an open-ended question. The problem is that the question invites too wide a range of responses, and you cannot say that everyone will answer the same way. In fact, you might not answer the same way one day as another. This is not reliable. They instead created a structured tool, which is not perfect but gets at what they are trying to measure. What they've done is identify the core pieces they should be looking for. They identified categories and subcategories with definitions. Definitions and parameters are an important part of training to use a tool like this. Anyone using the form needs to be aware of the difference between "some" and "a little" on the scale when observing the children. If all your data collectors are operating from the same knowledge, you will get reliable data.

26 Instrument Pilot Testing
Test before using - with a small number of people similar to those who will respond. Pilot test analysis should look at: Were the questions clear enough? Did people understand what you were asking? Do the answers seem appropriate given what you were asking? Make revisions based on the results of the pilot. One thing you can do to ensure your instruments are valid and reliable is a pilot test. Give your new tool to a sample group of people similar to those you expect to complete the real thing. Look at the data that come back and check whether they are what you expected. If the results are very divergent, you may need to change your tool. Take the time to pilot test before you send the instrument out to hundreds of people, only to find out that people answer in unexpected ways. This might seem like a daunting undertaking, but you will be glad you did it now while there is time to make changes.

27 Is Something Systematically Off?
Bias: Problems with WHO you ask (sampling bias, response rates, etc.) Problems with HOW you ask (inappropriate method, construction of your instrument) Another thing to think about is whether you are collecting the data in a systematic way, or whether something is off. If it is systematically off, you are introducing bias into your results. You can't trust that your data are accurate if you have bias; the data could be overstating the case or understating it. There are many ways bias can be introduced, and we'll highlight the most common of them. Problems with WHO you ask: sampling bias (not asking everyone), response rates (not everyone answers), etc. Problems with HOW you ask a question: inappropriate method (remember the example about middle school drug/alcohol focus groups), construction of your instrument, etc. You want to be as even and inclusive as possible to avoid bias.

28 What is Bias? Measurement bias occurs when information collected for use is inaccurate. Bias may be introduced by poor measurement design or poor data collection. Bias cannot be “controlled for” at the analysis stage. Bias risks readers drawing conclusions that are systematically different from the truth. Bias can lead to an over or underestimation of an effect. Some additional examples: Sampling bias – only measuring outcomes of beneficiaries that show the most improvement Or doing outcome measurement at the one school where you got permission which happens to be the “A Team” at the one school where teachers, administrators, and members work as a great team and use the results to imply similar results occurred at other schools. On the other hand, the one school might also be the one where there are a lot of problems (to illustrate over or underestimation of effect.) Focus group – People might not feel comfortable expressing their true feelings about some things with other people present. Interviews – tone or manner of interviewer cues respondent re: the "right" answer Response bias – only people who return the survey are ones who have strong feelings. This bias could be positive (your biggest fans) or negative (those most disappointed).

29 Revisit: What to Look For?
Are you measuring what counts/matters? Is your measurement approach credible? Are your instruments valid? Are your instruments reliable? Are your measurements precise? These are the most important questions to ask yourself.

30 Instrument Mapping Look at each question on your data collection tool and ask: Does this help us measure the desired outcome? Is there one question? More than one? What kind of data will we get? Subjective? Objective? Quantitative? Qualitative? If it doesn’t measure the outcome, do we really need to ask it? How will we use the answer? Nice to know but won’t use it? Internal use? How will we analyze this? What is our target? How much change is “enough”? We strongly suggest you take what ever tool(s) you are using and do a mapping exercise. A checklist is available at You need to go through your tools systematically and ask if you are on the right path. Most people don’t think from the beginning how they will be analyzing the data. Consider what the end product will be as you begin this process. Be careful to avoid the kitchen sink approach of asking every question you can think of. Some will be useful and some won’t. Avoid making your responders answer questions you won’t be using so they can concentrate on the questions you need. Now we’ll look at a few instrument examples, again not perfect examples, but worth looking at.

31 Target Example: Pre/Post test
This pre/post test shows five categories that are being assessed over time. When you aggregate, you can see positive or negative movement. What you'll need to consider in this case, as you aggregate the data, is how many categories a child must improve in to satisfy your outcome: "Mentored children will enhance developmental assets in the areas of social competence and positive identity."

32 Do You Have a Summary Sheet w/Target?
A summary sheet with a target will show how many categories on the pre/post test must improve for a participant to count as a success. In this example the target is three categories, so Jim Smith did not count as a success but Ana Ramirez did. Only these successful individuals count toward your outcome.
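As a rough illustration, a summary-sheet rule like this could be computed as follows. The pre/post scores below are hypothetical; the only fixed piece is the target of improvement in at least three of the five categories.

# Hypothetical pre/post scores for the five assessed categories (higher = better).
participants = {
    "Jim Smith":   {"pre": [2, 3, 2, 4, 3], "post": [2, 3, 3, 4, 3]},
    "Ana Ramirez": {"pre": [1, 2, 2, 3, 2], "post": [3, 3, 2, 4, 3]},
}

TARGET_CATEGORIES = 3  # categories that must improve to count as a success

successes = 0
for name, scores in participants.items():
    improved = sum(post > pre for pre, post in zip(scores["pre"], scores["post"]))
    is_success = improved >= TARGET_CATEGORIES
    successes += is_success
    print(f"{name}: improved in {improved} categories -> "
          f"{'success' if is_success else 'not a success'}")

print(f"{successes} of {len(participants)} participants count toward the outcome")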

33 After You Get the Data, then What?
Do you have a plan for who will aggregate and summarize the collected data, and how? Given the types of questions on your tool(s), is this realistic? Which data DIRECTLY relate to the desired outcome? NOTE: Just because you asked it doesn't mean it helps you report on your outcome…don't confuse people! Have a plan beforehand for how to summarize the data after you have collected it.

34 Here is another example of what you will need to do with your tool
Here is another example of what you will need to do with your tool. You need to identify what type of aggregation approach you will take with each kind of question. You can see that Questions #1-3 are on a Likert scale and will need a mean (average), as will #5. Question #4 is open-ended and will need content analysis. Questions #6-7 will require you to calculate a frequency and then a percentage. Plan ahead of time so you know how much time this will all take and are prepared when the instrument is returned. Examples are on the Resource Center.
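To make that plan concrete, here is a minimal sketch of the three aggregation approaches named above; the question numbering follows the example, but the response data are hypothetical.

# Hypothetical responses, keyed by question number, matching the plan above.
likert = {  # Questions 1-3 and 5: Likert scale (1-5), summarized with a mean
    1: [4, 5, 3, 4],
    2: [2, 3, 3, 4],
    3: [5, 5, 4, 4],
    5: [3, 4, 4, 5],
}
yes_no = {  # Questions 6-7: summarized with a frequency and a percentage
    6: ["yes", "yes", "no", "yes"],
    7: ["no", "no", "yes", "yes"],
}
# Question 4 is open-ended and would go through content analysis instead.

for q, answers in likert.items():
    print(f"Q{q}: mean = {sum(answers) / len(answers):.2f}")

for q, answers in yes_no.items():
    yes_count = answers.count("yes")
    print(f"Q{q}: {yes_count} 'yes' responses ({100 * yes_count / len(answers):.0f}%)")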

In addition to the tools we discussed in this presentation, there is a lot of other information on the Resource Center. Hopefully you have gained a deeper understanding of the core concepts that underlie your data collection and aggregation, no matter how you are doing it. You should now have some questions to ask of yourself and your tools to make sure you get the data you need to measure your outcomes for this year and next. Thank you.

