
1 Take out a blank sheet of paper  Number one side of the page from 1 to 10 from top to bottom. Do NOT put your name on it.  As words appear on the screen, write down the first thought that comes to mind, in the order the words appear, from 1 to 10.

2  Time  Death  Love  Mother  Red  Water  Home  Friend  Fear  Balloon

3  Fold the page in half and put it in the box as I come around.  As you read the person’s responses, consider what assumptions you can make about them; write these assumptions on the right side of the page and explain them.

4 Psychology – the scientific study of behaviour and mental processes.  Comes from the Greek – “psyche” – soul and “logos” – to know  It means literally “to know the soul”

5 The goals of psychology:  To describe behaviour by gathering information  Explain why people behave as they do  Predict what humans will think, feel, or do based on acquired knowledge  Influence behaviour in a helpful manner.

6 History is an important part of psychology because all aspects of psychological functioning involve some form of history:  Physiology – genes give you billions of years of stored history of successful mutation  Perception – perception is never instantaneous  Memory – recall of the past  Cognition – thinking, judging, decisions about the future  Learning – past learning guides present behaviour  Social/Cultural – knowledge passed down through institutions, cultures, families.

7  Therefore, the history behind our perceptions creates ambiguity or uncertainty that can lead to unpredictable and often harmful effects.  By studying the processes of these types of history, we gain knowledge of our perceptions and a clearer picture of reality. With this we can make better decisions.

8 The basis of all knowing is how we learn about the world  The problem is that the world is NOT self-explanatory – the reasons for the being and becoming of things, events, and people are not made known to us by simple observation of our universe, nor by thinking about them.  Basically, the causes of events/objects/changes are not easy to discover

9 The Poker Paradox  How can one person be good at poker and another, who understands the rules as well, be bad?  Because the good player understands psychology.

10 Imagine this table…  Raymer  Helmuth  Ferguson  You  Lederer  Dealer

11 What are some things you can OBSERVE to help you reach your decision? Actions of your opponents (“tells”): Breathing Twitching Tapping Coughing Sweating Showing emotion

12 What are some things that you CAN’T observe that help you make your decision? The situation of the game – bet/call/raise/check Your own hand! The history of the hand, the history of each player, the history of your last few games What cards are still out there, what odds do you have for an out.

13 Psychology is based on the same set of problems and assumptions as poker.  From the observable information, we attempt to discern the unobservable.  Two main outputs for “players” to study:  Behaviour – overt and public; can be observed and measured with high accuracy  Cognition/Emotion – internal events that can only be self-reported (introspection); can be unreliable.

14 Psychology uses the scientific method to achieve accuracy.  Identify a problem  Develop a hypothesis (and a null)  Gather data  Analyze results  Draw a conclusion

15  The doctor gathered the notes from her observations and added them to the test results she had obtained  The patient walked into the doctor’s office and complained of fever and lack of energy.  The doctor concluded that the patient had the flu.  The doctor thought that the patient might have the flu that was going around.  The doctor prescribed rest, aspirin, and plenty of liquids.  The doctor inspected the patient’s eyes, nose, and throat and ordered some tests.

16 All good scientific research has…  Reliability – the extent to which an experiment yields the same result on repeated trials.  Validity – the extent to which an experiment is accurately targeted to test the hypothesis stated.

17 Empiricism and objectivity  All psychological research ideas need to have two key components to be scientific:  Empiricism – an approach to the acquisition of knowledge that places a high emphasis on direct sensory information. There must be empirical evidence for ideas to be scientifically validated. ***Recent technological advances have made unobservable behaviours (e.g. cognition) scientifically quantifiable or observable.  Objectivity – Researchers’ thoughts about the topic being studied should have no influence on the data gathered or the interpretation of it – unbiased

18 Characteristics of good research:  Operational definitions – the behaviours or aspects being studied must be clearly defined in observable terms that are apparent and measurable.  Eg. It is not good research to say that you are going to observe enjoyment of ice cream in children. “Enjoyment” must be operationally defined as measurable survey results, or observable overt behaviours like smiling.

19 Reliability – can be established by considering the following questions: 1) Did more than one person collect the data and do their data sets agree? This is known as inter-rater reliability. 2) If you use the same method again in the same situation, will you get the same results? Can the study even be repeated? This is replicability. It relies on the author of a study outlining clear operational definitions and methods. If the study yields similar data in similar circumstances, then the study has test-retest reliability.
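The two forms of reliability above can be sketched numerically. Here is a minimal Python illustration (not part of the slides; the rater codes and scores are invented): percent agreement as a simple measure of inter-rater reliability, and a Pearson correlation between time-1 and time-2 scores as a measure of test-retest reliability.

```python
def percent_agreement(rater_a, rater_b):
    """Inter-rater reliability: the share of items both raters coded identically."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def pearson_r(x, y):
    """Test-retest reliability: correlation between scores at time 1 and time 2."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: two observers coding the same five behaviours
print(percent_agreement(["smile", "frown", "smile", "smile", "frown"],
                        ["smile", "frown", "smile", "frown", "frown"]))  # 0.8

# Hypothetical data: the same four participants tested on two occasions;
# a value near 1 indicates high test-retest reliability
print(pearson_r([10, 12, 9, 15], [11, 12, 10, 14]))
```

In practice researchers use more robust statistics (e.g. Cohen's kappa for inter-rater agreement), but the underlying question is the same: do independent measurements of the same thing agree?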

20 Validity – There are two types of validity: 1) Internal validity – the quality of the research itself, especially in cause-effect research claims. Are you studying what you claim to be studying or measuring what you say you are measuring? This requires clear operational definitions of the dependent and independent variables. Have extraneous variables been adequately managed or have their effects been considered? 2) External validity – is concerned with the appropriateness of generalizing the results to an intended population.

21 Common validity issues:  Internal: 1) What is the researcher trying to manipulate or measure, and is it really what they ended up measuring?  Face validity – the research appears, on the surface, to measure what it claims to measure 2) Did the location or nature of the research somehow make the participants act in a certain way?  Demand characteristics – when the participants try to guess the nature of the research being performed and act accordingly  Hawthorne effect – participants perform in a way they think will meet the expectations of a researcher.

22 Internal validity issues cont’d  Screw-you effect – opposite of the Hawthorne effect; participants attempt to generate data that will sabotage the research  Because of these effects, the researcher will often prefer to keep the targeted variables or the nature of the research a secret from the participants 3) Has the researcher maintained objectivity in their interpretation of their results?  If the researcher has specific hopes for the outcome of the research, it can affect their design, measurement, and interpretation to match these expectations – this is known as bias

23 External validity issues  External: 1) Was the research done in an artificial environment, or were the tasks performed artificial?  Ecological validity – the more a researcher tries to control the variables in an experiment, the more unlike the natural world it becomes, and therefore the less reliable the generalization to the wider target population will be.  Experimenting on and measuring psychological processes is difficult because the researcher is positing that a mental process can be indicated by an overt behaviour. This may not always be consistent throughout a population. **Technology is helping in this regard, as researchers can now observe brain activity while a participant conducts various tasks, but the variety of tasks that can be performed in an MRI bed is very limited.

24 Two examples of issues regarding generalization:  1) Due to ethical constraints, many experiments are conducted on animals rather than humans. If animals and humans are fundamentally different in their perceptions of pain and consciousness, then generalizing from animals to humans is less reliable.  2) An estimated 75% of all research participants are psychology undergraduates at university, as they provide the most ready and willing pool of people; according to McCray et al., 2005, a third of all research samples are composed of psychology students. Are all people like those that select undergraduate psychology courses in university?

25 Types of Research Methods  Case studies – intrinsic/instrumental  Experiments – lab, quasi, field  Surveys  Interviews  Observation – structured/unstructured, participant, covert/overt

26 Case Study  An observation of an individual, a situation or a group over a period of time  N=1  Instrumental – used to build a theory  Intrinsic – studied for its own merits with no intent to progress to a theory

27 Experiments  To determine how one factor is related to another – cause/effect is established  All other factors controlled – lab experiment  Pre-existing factors manipulated or no true control group – quasi-experiment  Other factors left uncontrolled – field experiment

28 Independent Variable  the factor to be manipulated in an experiment

29 Dependent Variable  The factor of the experiment that will be affected by a change in the independent variable.

30 Other variables  Extraneous variables – any uncontrolled variable in an experiment – not necessarily bad  Confounding variables – extraneous variables that affect the measurement of the dependent variable – very bad

31 Sample Surveys  Using questioning to obtain information about the thoughts or behaviours of a large group of people  Entire population not used – smaller segment used and assumed to reflect the entire whole - sampling

32 Random Sampling  selecting participants directly from the target population at random  This can be difficult because:  The target population is too large to enumerate and randomly select from  Coercion would be required to avoid volunteer bias  Therefore – a truly random sample is not possible
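When the target population *can* be enumerated, drawing a random sample is mechanically simple. A minimal Python sketch (the population of 1,000 labelled participants is invented for illustration):

```python
import random

# Hypothetical enumerated target population; in practice the target
# population is usually far too large to list out like this
population = [f"participant_{i}" for i in range(1000)]

# Every member has an equal chance of selection; sampling is without
# replacement, so no participant appears twice
sample = random.sample(population, k=50)
print(len(sample))       # 50
print(len(set(sample)))  # 50 – no duplicates
```

The practical obstacles the slide lists (enumeration, voluntariness) are exactly what this toy example assumes away.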

33 Sample size  The larger the sample size – the more the survey can account for individual differences that could affect the result

34 Interviews  Dialogue between the researcher and the subject to gain detailed information.  Structured – prepared questions, no follow-up allowed  Narrative – one question; the subject is allowed freedom in responding  Focus group – more than one subject at a time

35 Observation  Structured – observing and recording subjects with predetermined idea of target behaviours  Covert – subjects are unaware they are being observed  Participant – observations take place while participating in subject behaviours

36 Sampling  For complete generalization accuracy, a study would have to be carried out on the entire human population – this is not possible  In order to make generalization more reliable, sampling must be performed carefully and should be appropriate for the experiment design.

37 Sampling  Target population – the group to which the researcher wishes to generalize their findings.  Often the target population can be narrowed by specifying the culture, gender, age, or other parameter that would define them  Random sampling – selecting participants directly from the target population at random  Sampling bias – when the test population is altered from the target population due to the sampling method used  Eg. Research based on the students at the beginning of a class will exclude those coming late – thus altering the nature of the tested demographic. This population sample may affect the generalizability of the study to the target population.

38 Sampling Techniques  Random Sampling  Opportunity (Convenience) Sampling  Stratified (Quota) Sampling  Cluster Sampling  Purposive Sampling  Snowball Sampling

39 Opportunity (Convenience) Sampling  Using a group of participants not randomly selected from the target population, but invited to participate because they are easily contactable or sympathetic  This can present significant sampling bias dangers – Eg. Undergraduates as volunteer subjects.

40 Stratified (Quota) Sampling  Participants are grouped according to similar characteristics and a proportional number of random selections are made from each sub-group.
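The grouping-then-proportional-draw idea can be sketched in Python (the sub-groups, the 10% sampling fraction, and the helper name `stratified_sample` are invented for illustration):

```python
import random

def stratified_sample(population, strata_key, fraction):
    """Group the population by a shared characteristic, then randomly
    draw a proportional number of participants from each sub-group."""
    strata = {}
    for person in population:
        strata.setdefault(strata_key(person), []).append(person)
    sample = []
    for members in strata.values():
        k = round(len(members) * fraction)
        sample.extend(random.sample(members, k))
    return sample

# Hypothetical population: 60 adults and 40 teens. A 10% stratified
# sample preserves the 60/40 proportion: 6 adults and 4 teens.
people = [{"id": i, "age_group": "adult"} for i in range(60)] + \
         [{"id": i, "age_group": "teen"} for i in range(60, 100)]
sample = stratified_sample(people, lambda p: p["age_group"], 0.10)
print(len(sample))  # 10
```

The point of the proportional draw is that each stratum is represented in the sample in the same ratio as in the target population.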

41 Cluster Sampling  The target population is broken down into smaller subgroups and only those are tested  Eg. Focus groups – political opinion research

42 Purposive Sampling  Purposeful selection of a sample group based on their perceived beneficial responses  This is the most liable to the researcher’s prejudice, but it will also yield the richest data set

43 Snowball Sampling  Participants are asked to invite others they know to participate as well. In this manner, the sample size will grow exponentially
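Snowball recruitment can be pictured as a wave-by-wave traversal of the participants' contact network. A Python sketch (the contact network and the helper name are invented):

```python
def snowball_sample(contacts, seeds, waves):
    """Start with a few seed participants; in each wave, every current
    participant invites the contacts who have not already joined."""
    sampled = set(seeds)
    frontier = list(seeds)
    for _ in range(waves):
        next_frontier = []
        for person in frontier:
            for friend in contacts.get(person, []):
                if friend not in sampled:
                    sampled.add(friend)
                    next_frontier.append(friend)
        frontier = next_frontier
    return sampled

# Hypothetical contact network: one seed ("ana"), two recruitment waves
contacts = {"ana": ["ben", "cy"], "ben": ["cy", "di"], "cy": ["ed"], "di": ["fay"]}
print(sorted(snowball_sample(contacts, ["ana"], 2)))
# ['ana', 'ben', 'cy', 'di', 'ed']
```

Note the built-in bias: the sample can only ever contain people socially connected to the seeds, which limits how far results generalize.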

44 Qualitative vs. Quantitative  Data collection methods can be roughly divided into two groups. It is essential to understand the difference between them…  Quantitative – Research conducted with the aim of generating measurable numerical data. It assumes that variables can be identified and the relationships between them measured using statistics, with the aim of inferring cause-effect relationships. The hypothesis is tested using numerical data – i.e. statistical significance  Eg. Experiments; correlational studies; numerical surveys
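As an illustration of "testing a hypothesis using numerical data", here is a small permutation test in Python (the treatment/control scores are invented; real research would normally use an established statistics package):

```python
import random

def permutation_test(group_a, group_b, n_permutations=10000, seed=0):
    """Estimate a p-value for the observed difference in group means:
    how often does randomly re-shuffling all the scores into two groups
    produce a difference at least as large as the one observed?"""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)
        a, b = pooled[:len(group_a)], pooled[len(group_a):]
        if abs(sum(a) / len(a) - sum(b) / len(b)) >= observed:
            count += 1
    return count / n_permutations

# Hypothetical experiment: memory scores with and without a treatment
treatment = [12, 14, 11, 15, 13]
control = [9, 10, 8, 11, 10]
p = permutation_test(treatment, control)
print(p < 0.05)  # a small p-value: the difference is unlikely to be chance
```

A p-value below a chosen threshold (conventionally 0.05) is what "statistical significance" refers to above.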

45 Qualitative  Qualitative – Research conducted with the aim of generating subjective, descriptive data. Rather than testing a theory with a hypothesis, qualitative research tries to construct these theories.  Eg. Interviews, case studies, observations  The emphasis of this research is not on reliability and validity but on the detail and coherence of the design

46 Triangulation  In order to increase the reliability and validity of an experiment, researchers will use BOTH qualitative and quantitative methods in a single study and compare the results.  The use of combinations of methodologies and approaches to corroborate results is referred to as triangulation

47 Data Triangulation  The use of data from different sources – gathered at multiple times or multiple sites – to corroborate each other

48 Researcher Triangulation  Using different people as researchers to gather the same data sets. This increases confirmability and credibility and helps to eliminate researcher bias.

49 Theoretical Triangulation  Using different theoretical approaches to the same hypothesis. This requires the researcher to look at the data analysis from different viewpoints and justify why they consider a particular theory to be a relevant explanation of observed phenomena.

50 Methodological Triangulation  Using multiple methods to gather data on a single topic. This way, qualitative and quantitative data can be used to create a fuller picture of the phenomena under investigation

51 Reflexivity  Refers to the researcher’s need to be constantly aware of how and why they are conducting the research, and to identify points where their own beliefs or opinions may have influenced data collection or analysis.

