Reading and Evaluating Research

Presentation on theme: "Reading and Evaluating Research"— Presentation transcript:

1 Reading and Evaluating Research
Chapter 4 Reading and Evaluating Research

2 Overview When you need the best information, you need to read the research and question that research. Understanding an article Developing ideas from an article

3 Reading for Understanding
Choosing an article Reading the abstract Reading the introduction Reading the method section Reading the results section Reading the discussion

4 Choosing an article Selecting the right article may be the most important step. Approaches to finding an interesting article: Track down a study that you heard or read about. Look through the tables of contents of current journals for articles with interesting titles. Search the literature (by subject, author, or reference) using computerized tools such as Web of Science, ScienceDirect, and Springer, looking for interesting titles.

5 Reading the Abstract The abstract is a summary of the article
Goal and hypothesis Method Results Discussion If you don’t like the article’s abstract, consider finding another article. If the abstract seems promising, scan the article and read the first paragraph of the Discussion section. If you can’t understand that paragraph, consider looking for another article.

6 Reading the Introduction
The best place to start reading an article is at the beginning. Although unlabeled, the beginning of the article is called the introduction. The introduction is the most difficult, most time-consuming, and most important part of the article to understand. You must understand the introduction because it is where the authors tell you 1. how they came up with the hypothesis, including reasons why they think the hypothesis will be supported 2. reasons why the hypothesis might not be correct 3. why the hypothesis is important 4. why the authors’ way of testing the hypothesis is the best way to test the hypothesis

7 Reading the Introduction

8 Reading the Introduction
One way to look at the introduction is as a preview of the rest of the article. The authors start by giving you an overview of the research area. Then they go into more detail about what other researchers have found in exploring a specific research topic. Next, the researchers point to some problem with past research. The problem may be a gap in past research, such as a research hypothesis having been tested in only a couple of studies, or in none at all. Or the problem may be a flaw in past research, such as failing to control for key variables or obtaining conflicting results. The authors use the problem as the rationale for their study and then state their research question. Finally, the authors may explain why their method for testing the hypothesis is the best one.

9 Reading the Introduction
The introduction may be difficult and time-consuming, but it tells you what hypotheses the authors are testing and why; it sets the stage for the rest of the article. Don’t leave the introduction without knowing What the hypothesis is Why the authors are testing the hypothesis

10 Reading the Introduction
To reiterate, do not simply skim the introduction and then move on to the method section. The first time through the introduction, ask yourself two questions: 1. What concepts do I need to look up? 2. What references do I need to read?

11 Re-reading the Introduction
Then, after doing your background reading, reread the introduction. Do not move on to the method section until you can answer these six questions: 1. What variables are the authors interested in? 2. What is the prediction (hypothesis) involving those variables? (What is being studied?) 3. Why does the prediction (hypothesis) make sense? 4. How do the authors plan to test their prediction? Why does their plan seem to be a reasonable one? 5. Does the study correct a weakness in previous research? If so, what was that weakness? That is, where did others go wrong? 6. Does the study fill a gap in previous research? If so, what was that gap? That is, what did others overlook?

12 Reading the Method Section
Who (or what) the participants/materials were (often in a Participants/Materials subsection) How the measurement was carried out (often in an Apparatus or Measures subsection) What happened (often in a Procedure subsection)

13 Design subsection The design subsection might tell you whether the design was a survey, a between-subjects design (in which one group of participants is compared against another group), or a within-subjects (also called “repeated measures”) design (in which each participant is compared against himself or herself). Instead of a separate design subsection, authors may put information about the design in the participants subsection, in some other section, or even leave design information out of the method section entirely.

14 Method subsection Even though the introduction probably foreshadowed how the authors planned to test the hypothesis, the authors will still take you step by step through the process so that you could repeat their experiment.

15 Method section This “How we did it” section can be hard to follow when
(a) the authors give you too many details (details that might be useful for redoing the study but aren’t essential for understanding the basics of what the researchers did), (b) the authors avoid giving you too many details by using shorthand for a procedure (e.g., “we used the same procedure Hannibal & Lector [2005] used”), or (c) the authors use some task or piece of equipment that you are unfamiliar with.

16 Method section After reading the method section, ask yourself the following questions:
Would you have been engaged in the study? Would you have acted naturally? Would you have figured out the hypothesis? Would you have interpreted the situation the way the researchers expected you to? Would you have been able to avoid biasing the results?

17 Method section What were the participants/subjects like (species, gender, age), and how did they come to be in the study? What was the procedure? What were the key variables in this study, and how did the researcher operationally define those variables? For example, how was the dependent variable measured? What was the design (type of study, e.g., survey, simple experiment)? Is it possible to repeat the measurement (apparatus, measurement steps)? How was the statistical analysis done?

18 Reading the Results Section
Turn to the results section of the article you selected to find out what happened. Just as a sports box score tells you how your team did, the results section tells you how the hypotheses did (whether each hypothesis “won”) and provides an in-depth analysis of what participants did. Know what the scores mean (including abbreviations) Results of statistical analysis Basic descriptive statistics (mean, SD, CV, and range) Normality (the basis of the t-test, ANOVA, and MANOVA) Reliability (if a new method is used) “Higher” tests (ANOVA, Tukey, non-parametric tests, cluster analysis) Know whether the hypothesis was supported
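The basic descriptive statistics and the t-test named above can be sketched in plain Python. This is a minimal stdlib-only illustration (the function names are my own; in practice you would use a statistics package to get p values as well):

```python
import statistics as st

def describe(scores):
    """Basic descriptive statistics: mean, SD, CV, and range."""
    mean = st.mean(scores)
    sd = st.stdev(scores)              # sample standard deviation
    return {
        "mean": mean,
        "sd": sd,
        "cv": sd / mean,               # coefficient of variation
        "range": max(scores) - min(scores),
    }

def t_statistic(group_a, group_b):
    """Independent-samples t statistic, assuming equal variances."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * st.variance(group_a) +
                  (nb - 1) * st.variance(group_b)) / (na + nb - 2)
    diff = st.mean(group_a) - st.mean(group_b)
    return diff / (pooled_var * (1 / na + 1 / nb)) ** 0.5
```

Comparing the t statistic against the critical value for the study's degrees of freedom (or letting software report a p value) is what tells you whether the group difference is statistically significant.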

19 Reading the Results Section
be able to answer these five questions: 1. What are the scores they are putting into the analysis? 2. What are the average scores for the different groups? Which types of participants score higher? Lower? 3. Do I understand all the tables and figures that contain descriptive statistics, such as tables of means, percentages, correlations, and so on? 4. What type of statistical analysis did the authors use? 5. Do the results appear to support the authors’ hypothesis? Why or why not?

20 Reading the Discussion Section
The discussion section relates the results to the real world, to theory, and to future research. Whereas the results section analyzes the results in relation to the hypothesis, the discussion section interprets the results in light of the bigger picture. Summarizes results relating to the hypothesis Integrates/reconciles results with the introduction May suggest future research

21 To do while reading 1. Jot down the main findings. 2. Relate these findings to the introduction. 3. Speculate about the reasons for any surprising results.

22 Questions after Discussion
you should be able to answer these five questions: 1. How well do the authors think the results matched their predictions? 2. How do the authors explain any discrepancies between their results and their predictions? 3. Do the authors admit that their study was flawed or limited in any way? If so, how? 4. What additional studies, if any, do the authors recommend? 5. What are the authors’ main conclusions?

23 Developing Research Ideas from Existing Research
Direct replications Systematic replications Conceptual replications Extending research

24 The Direct Replication
An “exact copy” of the original study

25 The Systematic Replication
A slight modification of the original study Done for any of these reasons: All the reasons you would do a direct replication To have more power than the original To have more external validity than the original To have more construct validity than the original

26 The Conceptual Replication
A replication that usually uses different measures and/or manipulations than the original Failure to replicate casts doubt on the original study’s construct validity; successful replication boosts confidence in the validity of the original study’s conclusions

27 Extending Research Add moderating variables
Add/manipulate mediating variables Look for the functional relationships Do studies suggested by study’s authors

28 Extending Research

29 Concluding Remarks You now know how to Read research
Criticize/Evaluate research Get research ideas from published research

30 Summary 1. Not all articles are equally easy and interesting to read. Therefore, if you are given an assignment to read any article, you should look at several articles before committing to one. 2. Reading the title and the abstract can help you choose an article that you will want to read. 3. The abstract is a short, one-paragraph summary of the article. In journals, the abstract is the paragraph immediately following the authors’ names and affiliations. 4. In the article’s introduction, the authors tell you what the hypothesis is, why it is important, and justify their method of testing it. 5. To understand the introduction, you may need to refer to theory and previous research.

31 Summary 6. The method section tells you who the participants were, how many participants there were, and how they were treated. 7. In the results section, authors should report any results relating to their hypotheses and any statistically significant results. 8. The discussion section either reiterates the introduction and results sections or tries to reconcile the introduction and results sections. 9. When you critique the introduction, question whether (a) testing the hypothesis is vital, (b) the hypothesis follows logically from theory or past research, and (c) the authors have found the best way to test the hypothesis. 10. When you critique the method section, question the construct validity of the measures and manipulations and ask how easy it would have been for participants to have played along with the hypothesis.

32 Summary 11. When you look at the results section, question any null (nonsignificant) results. The failure to find a significant result may be due to the study failing to have enough power. 12. In the discussion section, question the authors’ interpretation of the results, try to explain results that the authors have failed to explain, find a way to test your explanation, and note any weaknesses that the authors concede. 13. Errors may justify doing a direct replication. 14. You can do a systematic replication to improve power, external validity, or construct validity. 15. If minor changes can’t fix problems with a study’s construct validity, you should do a conceptual replication. 16. Replications are vital for the advancement of science. 17. Reading research should stimulate research ideas.

33 Reliability and Validity
Chapter 5 Reliability and Validity

34 Overview Measuring Variables Choosing a Behavior to Measure
Overview of Types of Measurement Errors Bias Random error Reliability Validity Manipulating Variables Threats to validity Establishing validity Types of manipulations

35 Two Types of Measurement Error
Bias Random error

36 Three “Places” Measurement Error Can Occur
Observer/Scorer Participant Person administering the measure

37 Two Types of Observer Error
Observer bias (Scorer bias) Random observer error

38 Minimizing Observer Errors
Why it is more important to reduce observer bias than random observer error Techniques for reducing observer bias

39 Techniques for Reducing Observer Bias
Eliminating human observer errors by eliminating the human observer Limiting human observer errors by limiting the human observer’s role Reducing observer bias by making observers “blind” Conclusions about reducing observer bias

40 Reducing Random Observer Error
Most of the techniques that reduce observer bias reduce random observer error

41 Errors in Administering the Measure
Types Experimenter (researcher) bias Random error Solutions Blind technique to reduce bias Standardization to reduce both bias and random error

42 Errors Due to the Participant
Bias due to the participant (Subject bias) Random error due to the participant

43 Subject (Participant) Bias
Obeying demand characteristics Social desirability bias

44 Conclusions about Reducing Subject Biases
Blind techniques can reduce demand characteristics Making participants anonymous can reduce social desirability bias

45 Summary of Types of Measurement Error
Try to reduce all forms of measurement error Really focus on reducing bias

46 Reliability: The (Relative) Absence of Random Error
The importance of being reliable: Reliability as a prerequisite to validity Using test-retest reliability to assess overall reliability: To what degree is a measure “random error free”?
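Test-retest reliability is usually quantified as the correlation between scores from the two administrations of the measure. A minimal Pearson-correlation sketch in plain Python (the helper name is my own, for illustration):

```python
def pearson_r(x, y):
    """Pearson correlation between two equal-length lists of scores."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Test-retest reliability: correlate each participant's two scores.
# A coefficient near 1.0 suggests the measure is relatively free of
# random error; a low coefficient signals a reliability problem.
test = [10, 12, 14, 16]
retest = [11, 12, 15, 16]
r_test_retest = pearson_r(test, retest)
```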

47 Identifying (and Then Dealing with) the Main Source of a Measure’s Reliability Problems
Are observers to blame for low test-retest reliability? Assessing observer reliability Non-observer sources of random error Using internal consistency measures to estimate random error due to participants
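Observer reliability is often assessed by having two observers code the same behavior and then computing a chance-corrected agreement index such as Cohen's kappa. A stdlib-only sketch, assuming categorical codes (the helper name is my own):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two observers' categorical codes."""
    n = len(rater_a)
    # Proportion of cases where the two observers assigned the same code
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Agreement expected by chance, from each rater's marginal frequencies
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

Kappa near 1.0 indicates the observers agree far beyond chance; kappa near 0 means their agreement is no better than guessing, pointing to observer error as a source of unreliability.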

48 Internal Consistency: Test Questions Should Agree with Each Other
Random error due to participants may cause low internal consistency

49 Measuring Internal Consistency
Average inter-item correlations as indexes of internal consistency Split-half coefficients as indexes of internal consistency Additional indexes of internal consistency Conclusions about internal consistency’s relationship to reliability
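The internal-consistency indexes listed above can be illustrated with Cronbach's alpha, which compares the sum of the item variances to the variance of respondents' total scores. A stdlib-only sketch (the data layout is an assumption made for this example):

```python
import statistics as st

def cronbach_alpha(item_scores):
    """Cronbach's alpha as an index of internal consistency.

    item_scores: one list per test item, each holding that item's
    score for every respondent (all lists the same length).
    """
    k = len(item_scores)
    sum_item_vars = sum(st.variance(item) for item in item_scores)
    # Each respondent's total score across all items
    totals = [sum(scores) for scores in zip(*item_scores)]
    return k / (k - 1) * (1 - sum_item_vars / st.variance(totals))
```

Alpha near 1.0 suggests the items agree with each other; a low alpha points to random error due to participants, or to items that measure different things.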

50 Conclusions About Reliability
Reliability is a prerequisite for validity If test-retest reliability is low, try to find out where reliability problem is and fix it. Reliability does not guarantee validity

51 Reliability

52 Beyond Reliability: Establishing Construct Validity
Content Validity Internal Consistency Convergent Validity: Getting evidence that you are measuring the right construct Discriminant Validity: Showing that you are not measuring the wrong construct

53 Steps of validity

54 Manipulating Variables
Common threats to a manipulation’s validity Evidence used to argue for a manipulation’s construct validity Tradeoffs among three common types of manipulations Conclusions

55 Common Threats to a Manipulation’s Validity
Random error Experimenter bias Subject biases

56 Evidence Used to Argue for a Manipulation’s Construct Validity
Consistency with theory Manipulation checks

57 Tradeoffs Among Three Common Types of Manipulations
Instructional manipulations Environmental manipulations Manipulations involving stooges

58 Types of Manipulations

59 Concluding Remarks Operational definitions should
Be consistent with dictionary/theory definitions Be standardized to reduce bias and random error Have evidence to support their validity

60 Summary 1. Reliability refers to whether you are getting consistent, stable measurements. Reliable measures are relatively free of random error. 2. One way to measure the extent to which a measure is free of random error is to compute its test–retest reliability. 3. Three major sources of unreliability are random errors in scoring the behavior, random variations in how the measure is administered, and random fluctuations in the participant’s performance. 4. You can assess the degree to which random errors due to the observer are affecting scores by computing an interobserver reliability coefficient. Interobserver reliability puts a ceiling on test–retest reliability.

61 Summary 5. For objective tests, you may get some idea about the degree to which scores are affected by random, moment-to-moment fluctuations in the participant’s behavior by using an index of internal consistency. Popular indexes of internal consistency are Cronbach’s alpha, split-half reliabilities, and average inter-item correlations. 6. Random error is different from bias. Bias is a more serious threat to validity. In a sense, random error dilutes validity, whereas bias poisons validity. 7. Validity of a measure refers to whether you are measuring what you claim you are measuring. 8. Reliability puts a ceiling on validity; therefore, an unreliable measure cannot be valid. However, reliability does not guarantee validity; therefore, a reliable measure may be invalid.

62 Summary 9. A valid measure must (a) have some degree of reliability and (b) be relatively free of both observer and subject biases. 10. Two common subject biases are social desirability (trying to make a good impression) and obeying the study’s demand characteristics (trying to make the researcher look good by producing results that support the hypothesis). 11. By not letting participants know what you are measuring (unobtrusive measurement), you may be able to reduce subject biases. 12. Establishing internal consistency, discriminant validity, convergent validity, and content validity are all ways of building the case for a measure’s construct validity.

63 Summary 13. Choosing a manipulation involves many of the same steps as choosing a measure. 14. Placebo treatments and unobtrusive measurement can reduce subject bias. 15. “Blind” procedures and standardization can reduce experimenter bias. 16. You can use manipulation checks to make a case for your manipulation’s validity.

