A Multi-disciplinary Perspective on Decision-making and Creativity:

A Multi-disciplinary Perspective on Decision-making and Creativity: Using the Diversity of Truth-seeking and Sense-making to Advantage in Organizational Contexts Wayne Smith, Ph.D. Department of Management CSU Northridge Updated: Thursday, April 04, 2019

7 – Research/Methodological

Research/Methodological
- Use of Secondary Data is convenient, but potentially less useful.
- Primary Data collection is definitely less convenient, but potentially very useful.

Research/Methodological
Survey:
- quantitative vs. qualitative (evaluate numbers or examine texts)
- multiple choice vs. "open ended"
- face-to-face vs. non-face-to-face
Direct Observation:
- Passive Field Work
"Experimentation" is almost always out of the question.

Research/Methodological Perspectives
- Positivist
- Interpretivist
- Critical
Learning to see causal relations; learning to see what isn't in the data.

Differences between Theory & Ideology

Theory                                         | Ideology
Conditional, negotiated understanding          | Offers absolute certainty
Incomplete; recognizes uncertainty             | Has "all the answers"
Growing, open, unfolding, expanding            | Fixed, closed, finished
Welcomes tests, positive and negative evidence | Avoids tests and findings
Changes based on evidence                      | Blind to opposing evidence
Detached, disconnected from moral stands       | Locked into specific moral beliefs
Neutral; considers all sides                   | Highly partial
Strongly seeks logical consistency, congruity  | Has contradictions and inconsistencies
Transcends/crosses social positions            | Rooted in a specific position

How Our Research Models Work
In English: predicted value = estimated value + error
In math: Ŷ = X + E
E (error) has two components: 1) random error (chance) and 2) bias (systematic error).
Use statistical techniques for 1); use judgment for 2).
In general, let's leave the characterization of the random error (chance) to the professionals. As managers, let's try to characterize the bias (systematic error), so the professionals (and other managers) don't miss it.
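The two error components behave very differently, and a small simulation makes that concrete. The numbers below (a true value of 100, a bias of 5) are hypothetical, chosen only for illustration: averaging many measurements shrinks the random error toward zero, but no amount of averaging removes the bias.

```python
import random

random.seed(42)

TRUE_VALUE = 100.0   # the quantity we are trying to measure (hypothetical)
BIAS = 5.0           # systematic error: shifts every measurement the same way
NOISE_SD = 2.0       # random error: scatters measurements around their center

# Simulate 10,000 measurements: observed = true value + bias + random error
measurements = [TRUE_VALUE + BIAS + random.gauss(0, NOISE_SD)
                for _ in range(10_000)]

mean = sum(measurements) / len(measurements)

# Averaging drives the random error toward zero...
print(f"mean of measurements: {mean:.2f}")
# ...but the bias survives: the mean sits near 105, not 100.
print(f"remaining offset:     {mean - TRUE_VALUE:.2f}")
```

This is why statistical technique handles the random component while judgment is needed for the bias: the data alone cannot reveal a constant offset unless someone notices it.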

Threats to Internal Validity (especially surveys)
- Selection Bias (by far, the biggest issue in practical life) is the threat that research participants will not form equivalent groups.
- History Bias is the threat that an event unrelated to the treatment will occur during the experiment and influence the dependent variable.
- Maturation Bias is the threat that some biological, psychological, or emotional process within the subjects, separate from the treatment, will change over time.
- Testing Bias: sometimes a pre-test (pilot) influences the actual (real) test.
- Instrumentation Bias occurs when the instrument changes during its administration.

Threats to Internal Validity (especially surveys)
- Mortality arises when some subjects do not continue throughout the experiment.
- Diffusion of Treatment (Contamination) is the threat that research participants in different groups will communicate with each other and learn about the other's treatment.
- Compensatory Behavior occurs when an experiment provides something of value to one group, but not to another.
- Experimenter Expectancy occurs when subtle, inadvertent changes by the interviewer alter the responses of one or more respondents.
One last thing: Threats to External Validity. Does the survey instrument actually reflect reality (and all of it)? Are we absolutely sure that the survey doesn't change the reality?
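Selection bias, the biggest of these threats in practice, can be sketched in a few lines. In this hypothetical simulation the "treatment" does nothing at all, yet because higher-scoring people self-select into the treated group, a naive comparison of group averages shows a large apparent effect; random assignment makes the gap vanish.

```python
import random

random.seed(1)

# Each "participant" has a baseline score (hypothetical distribution).
people = [random.gauss(50, 10) for _ in range(10_000)]

# Non-equivalent groups: higher-baseline people self-select into
# treatment. The treatment itself does NOTHING to anyone's score.
treated   = [p for p in people if p > 55]
untreated = [p for p in people if p <= 55]

def avg(xs):
    return sum(xs) / len(xs)

gap = avg(treated) - avg(untreated)
# The groups differ even though no treatment was applied:
print(f"apparent 'treatment effect': {gap:.1f} points")

# Random assignment forms equivalent groups, and the gap disappears:
random.shuffle(people)
half = len(people) // 2
gap_random = avg(people[:half]) - avg(people[half:])
print(f"gap under random assignment: {gap_random:.1f} points")
```

The same logic explains why surveys of volunteers, customers who chose to respond, or employees who opted into a program can mislead: the groups were never equivalent to begin with.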

As a manager, what do we want to do with the research findings? That is, what inferences do we want to make from research? Let's say we were giving a test to a job applicant...
- Scoring, in which scoring rules are used to generate scores that are most appropriate for performances on each task in the test.
- Generalization extends the interpretation from observed scores across all tasks on a particular test to the domain score: the score expected if the examinee were administered the entire universe of potential tasks allowed by the testing procedure.
- Extrapolation extends the interpretation to a level of examinee ability in the domain of interest, with implications for expected future performance on domain-relevant tasks that might be beyond the scope of tasks allowed by the testing procedure.
- Decision uses estimates of examinee ability to make decisions about examinees, such as whether the examinee is competent to practice a profession, would be appropriate for a particular academic institution, or would benefit from a certain academic course.
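The chain of inferences above can be sketched as a toy job-applicant test. Everything here is hypothetical (the examinee's true ability, the 40 sampled tasks, the 0.60 competence cutoff); the point is only to show where each inference sits between the raw item scores and the final decision.

```python
import random

random.seed(7)

TRUE_ABILITY = 0.70   # hypothetical: fraction of the whole task domain
                      # the examinee could answer correctly
PASS_CUTOFF = 0.60    # hypothetical competence standard

# Scoring: apply a scoring rule (1 = correct, 0 = incorrect) to each
# of the 40 tasks actually sampled from the much larger domain.
sampled_tasks = 40
item_scores = [1 if random.random() < TRUE_ABILITY else 0
               for _ in range(sampled_tasks)]
observed_score = sum(item_scores) / sampled_tasks

# Generalization: treat the observed score as an estimate of the
# domain score (expected performance over ALL potential tasks).
domain_score_estimate = observed_score

# Extrapolation is a judgment, not a computation: we assume the domain
# score also predicts performance on future, on-the-job tasks.

# Decision: compare the ability estimate to the cutoff.
decision = ("competent" if domain_score_estimate >= PASS_CUTOFF
            else "not yet competent")
print(f"observed score: {observed_score:.2f} -> {decision}")
```

Note how much rides on the non-computational steps: the arithmetic is trivial, but generalization and extrapolation are exactly where the validity threats from the earlier slides enter.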

Sources
Neuman, W. L. (2003). Social Research Methods: Qualitative and Quantitative Approaches. Pearson Education.