SURVEYS VERSUS INSTRUMENTS


1 SURVEYS VERSUS INSTRUMENTS
Damon Burton University of Idaho

2 WHAT IS A SURVEY? Surveys are questionnaires that can be used inductively and/or deductively to answer a particular research question. They are typically used only once, to answer a practical research question. Surveys may be conducted to (a) get community preferences on capital improvement projects, (b) identify satisfaction with recreation programming, or (c) solicit input on possible curricular changes.

3 WHAT IS AN INSTRUMENT? Instruments are standardized questionnaires with demonstrated reliability and validity that are developed as deductive research tools. Instruments measure specific latent constructs (e.g., motivation, perfectionism, leadership), can address many types of specific research questions, and are used in many kinds of studies investigating that construct. Instruments may be utilized to (a) assess students’ confidence in math, (b) examine how perfectionism impacts problem-solving, or (c) investigate how different teaching styles influence learning.

4 DEVELOPMENT OF SURVEYS
Surveys are typically developed to answer specific practical or applied questions. Questions are worded based on content experts’ opinions of “face validity.” Data from a large sample of respondents are typically not necessary to finalize the survey’s item pool. Pilot testing is typically used to identify administration or wording problems. Reliability evidence is limited to test-retest, and validity is limited to face validity.

5 INSTRUMENT DEVELOPMENT
Instruments are typically developed as research tools that will be utilized in many studies where a specific construct (e.g., mindsets) needs to be measured reliably and validly. Instruments must document evidence of reliability and validity to be useful. Internal consistency reliability must be demonstrated along with test-retest reliability. Initially, questions are worded based on content experts’ opinions of face validity, but additional research must demonstrate solid support for factorial and construct validity.

6 INSTRUMENT DEVELOPMENT
Respondents’ answers are typically used to select and refine items and finalize the item pool. A 40-item instrument often starts out with a pool of 100 or more items. Instruments typically require 2-4 data collections to refine items to final form and accumulate initial psychometric evidence.

7 INSTRUMENT DEVELOPMENT
Each round, the item pool is subjected to Exploratory Factor Analysis (EFA), which groups together items that are responded to similarly. EFA examines whether the inventory is unidimensional or has underlying dimensions or characteristics. Items that don’t factor cleanly, have low reliability, or show poor item-to-subscale correlations are rewritten or eliminated. Eventually, the final item pool must empirically confirm the instrument’s conceptual model of which items group together into subscales.

8 CONSTRUCTION STRATEGIES
Surveys – Quicker and easier to develop but longer to complete; few hypotheses, more practical, and inductive; simpler development focused on ‘face validity’; diverse topics; single-stage process; content validity is the major concern (e.g., face validity).
Instruments – Shorter in length and more conceptually focused; more hypothesis-focused, conceptually driven, and deductive; complex multi-stage development with demonstrated validity; topics more focused; multi-stage process to develop and validate; factorial, concurrent, and construct validity ensure you are measuring what you intend to measure.

9 INSTRUMENT DEVELOPMENT GUIDELINES
STEP 1 – Determine what you want to measure (e.g., “motivational styles”). STEP 2 – Generate an item pool (e.g., we started with 106 items measuring 4 styles based on 7 hypothesized characteristics). STEP 3 – Determine the format for measurement (e.g., 6-point Likert scale) and labels. STEP 4 – Have experts review the item pool and rate it on (a) face validity, (b) grammar, (c) sentence structure, and (d) readability.
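Step 3’s scoring decision can be sketched in a few lines. This is a minimal illustration only; the verbal anchors below are hypothetical, not the labels any particular instrument uses, and the reverse-keying helper is a generic convention, not a step the author prescribes.

```python
# Hypothetical 6-point Likert anchors mapped to numeric scores.
LIKERT_6 = {
    "strongly disagree": 1,
    "disagree": 2,
    "slightly disagree": 3,
    "slightly agree": 4,
    "agree": 5,
    "strongly agree": 6,
}

def score_response(label, reverse=False):
    """Convert a verbal anchor to a numeric score; reverse-keyed items
    are flipped so higher always means 'more of the construct'."""
    value = LIKERT_6[label.lower()]
    return (7 - value) if reverse else value
```

Fixing the label-to-number mapping before data collection keeps every administration of the item pool scored identically.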

10 SCALE DEVELOPMENT GUIDELINES
STEP 5 – Include a social desirability scale for controversial topics (i.e., item demand characteristics should be kept as small as possible). STEP 6 – Administer the item pool to a developmental sample (e.g., year-old club soccer players at a Midwest ODP Regional Camp). STEP 7 – Evaluate items (e.g., use EFA and other analyses to check reliability and validity). STEP 8 – Optimize scale length (i.e., the goal is 4-5 items per subscale for all dimensions to maximize instrument reliability).
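Step 8 amounts to ranking each subscale’s items and keeping the strongest few. A minimal sketch, using invented item names and item-to-subscale correlations (not real data from any instrument):

```python
# Made-up item-to-subscale correlations for one hypothetical subscale.
item_total_r = {
    "worry_1": 0.81, "worry_2": 0.74, "worry_3": 0.58,
    "worry_4": 0.77, "worry_5": 0.69, "worry_6": 0.83, "worry_7": 0.49,
}

def trim_subscale(corrs, keep=5):
    """Keep the `keep` items with the highest item-to-subscale correlations."""
    ranked = sorted(corrs, key=corrs.get, reverse=True)
    return ranked[:keep]

best_items = trim_subscale(item_total_r)  # strongest 5 of the 7 candidates
```

In practice the cut would also weigh factor loadings and content coverage, not correlations alone.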

11 STEP 1 - DETERMINE WHAT YOU WANT TO MEASURE
Do you base the instrument on theory or create your own conceptual framework? A good theory can be helpful in developing items; if theory is not available, develop a conceptual framework for your scale. Specificity helps clarity, so decide whether you want to be more general or have greater specificity. Rotter’s (1966) internal-external scale is “general,” comparing internal versus external sources of control, while Levenson’s (1973) multidimensional scale measures more specific dimensions (i.e., person, powerful others, and fate) as sources of control.

12 STEP 1 - DETERMINE WHAT YOU WANT TO MEASURE
Wallston, Wallston & DeVellis (1978) developed the Multidimensional Health Locus of Control Scale based on 3 locus of control dimensions, which can be made specific to a variety of medical conditions (e.g., diabetes). Specificity can focus on outcomes (e.g., better health, business efficiency), content (e.g., anxiety), setting (e.g., school vs. work), and populations (e.g., children vs. adults). Make sure your instrument measures only the target construct and doesn’t also measure other constructs inadvertently.

13 SPECIFICITY CASE STUDY COMPETITIVE ANXIETY
Martens, Burton, Vealey, Bump & Smith (1990) developed and validated the Competitive State Anxiety Inventory-2 (CSAI-2), which for over 30 years has been the major tool for assessing state anxiety in sport. The major problem with the CSAI-2 is that some of its items, which are symptoms of physical and mental state anxiety, can also measure other, positive emotions (e.g., excitement or confidence). Several researchers have added a valence scale to the CSAI-2 so athletes can also rate how much their symptoms are facilitative or debilitative to performance.

14 SPECIFICITY CASE STUDY COMPETITIVE ANXIETY
My colleagues and I disagree with the valence scale approach because, by definition, anxiety is a negative emotion that is debilitating to performance. The only way to measure state anxiety independently from other positive emotions with similar symptoms is to develop a new instrument. The development of the CSAI-3 is a complex 3-stage process. Stage 1 is to develop new items that measure 6 dimensions of state anxiety: worry, motivation, focus, arousal, bodily tension, and affect.

15 SPECIFICITY CASE STUDY COMPETITIVE ANXIETY
We collect data from a large sample of athletes and analyze the results to determine which items meet selection criteria. Stage 2 is to identify how many items in the initial item pool are considered debilitative to performance by 70% or more of athletes. Again, we collect data from a large sample and analyze the results to determine how to revise the item pool to include only ‘true anxiety’ items. Stage 3 revises the item pool a second time, and the instrument is subjected to multiple validation studies to assess overall construct validity.
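The Stage 2 screening rule is mechanical once the ratings are collected: keep an item as a ‘true anxiety’ item only if at least 70% of athletes rate its symptom as debilitative. A minimal sketch with invented item names and ratings (1 = rated debilitative, 0 = not):

```python
# Invented illustration data, not CSAI-3 results.
ratings = {
    "tense_muscles": [1, 1, 1, 0, 1, 1, 1, 1, 0, 1],  # 8 of 10 = 80%
    "racing_heart":  [1, 0, 0, 1, 0, 1, 0, 1, 0, 0],  # 4 of 10 = 40%
}

def is_true_anxiety_item(votes, threshold=0.70):
    """Apply the 70%-debilitative criterion to one item's ratings."""
    return sum(votes) / len(votes) >= threshold

kept = [item for item, votes in ratings.items() if is_true_anxiety_item(votes)]
```

Items whose symptoms most athletes experience as neutral or facilitative (like the second item here) are dropped from the revised pool.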

16 TYPES OF INSTRUMENT VALIDATION
“Face Validity” – the wording of items seems to measure the construct of interest (i.e., anxiety). “Factorial Validity” – subjecting data to Exploratory Factor Analysis (EFA) and then Confirmatory Factor Analysis (CFA) demonstrates that items group together consistent with hypothesized predictions. “Concurrent Validity” – the instrument is administered concurrently with other instruments that are expected to relate to the target construct (i.e., anxiety) in both positive and negative directions and at different relationship magnitudes.

17 TYPES OF INSTRUMENT VALIDATION
“Predictive Validity” – makes conceptual predictions about relationships between the CSAI-3 and related constructs and then tests those predictions. “Construct Validity” – examines causal relationships by using intervention studies to test whether a program to reduce anxiety actually lowers CSAI-3-measured anxiety levels.

18 TYPES OF ANALYSES TO VALIDATE INSTRUMENTS
Item-to-Subscale Correlations – individual items should have a correlation of at least .70 with their dimension or subscale. Exploratory Factor Analysis (EFA) – groups together items that are responded to in a similar way. Items that don’t demonstrate a factor coefficient of .50 or greater, or that have cross-loadings greater than .25, are eliminated from the scale or modified to improve item quality.
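The EFA retention rule above (primary loading of .50 or greater, no cross-loading above .25) can be sketched as a simple filter. The item names and loadings below are invented for illustration; real loadings would come from EFA output:

```python
# item: (primary factor loading, largest cross-loading) -- made-up values.
loadings = {
    "item_01": (0.72, 0.12),   # retained
    "item_02": (0.44, 0.10),   # primary loading below .50
    "item_03": (0.61, 0.31),   # cross-loading above .25
}

def retain(primary, cross, min_primary=0.50, max_cross=0.25):
    """Apply the loading thresholds stated above to one item."""
    return primary >= min_primary and abs(cross) <= max_cross

survivors = [item for item, (p, c) in loadings.items() if retain(p, c)]
```

Items that fail either threshold are rewritten for the next data collection or dropped outright.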

19 TYPES OF ANALYSES TO VALIDATE INSTRUMENTS
Confirmatory Factor Analysis (CFA) – tests how well items fit the measurement model for the construct developed through EFA. Multiple fit indices are used to assess various types of model fit. Multivariate Analysis of Variance (MANOVA) – is used to examine whether the construct differs across groups (e.g., growth vs. fixed mindsets) in ways predicted by theory or previous research. Alpha Internal Consistency Reliability – tests the strength of the relationships among the items comprising a dimension or subscale. Values above .70 are desired.
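Alpha internal consistency reliability has a short closed form: alpha = k/(k-1) × (1 − Σ item variances / variance of total scores). A minimal sketch using only the Python standard library, with invented response data (4 items, 5 respondents):

```python
from statistics import pvariance

def cronbach_alpha(item_scores):
    """Cronbach's alpha from a list of per-item score columns
    (each inner list = one item's scores across all respondents)."""
    k = len(item_scores)
    totals = [sum(resp) for resp in zip(*item_scores)]   # per-respondent totals
    item_var_sum = sum(pvariance(col) for col in item_scores)
    return k / (k - 1) * (1 - item_var_sum / pvariance(totals))

# Invented illustration data, not from any real instrument.
items = [
    [4, 5, 3, 5, 4],
    [4, 4, 3, 5, 4],
    [5, 5, 2, 5, 4],
    [4, 5, 3, 4, 4],
]
alpha = cronbach_alpha(items)  # well above the .70 criterion here
```

Because the items here move together across respondents, alpha comes out high; a subscale whose items disagree would fall below the .70 criterion and need rewriting.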

20 FUNDAMENTAL RESEARCH STRATEGIES
Precision of Measurement, Generalizability of Results, and Reality of Measurement (McGrath, 1975).

21 The End

