Developing a Hiring System: Reliability of Measurement

Key Measurement Issues
– Measurement is imperfect
– Reliability: how accurately do our measurements reflect the underlying attributes?
– Validity: how accurate are the inferences we draw from our measurements?
  – refers to the uses we make of the measurements

What is Reliability?
– The extent to which a measure is free of measurement error
– Obtained score = True score + Random error + Constant error

What is Reliability?
– Reliability coefficient = % of obtained-score variance due to true scores
  – e.g., a performance measure with r_yy = .60 is 60% "accurate" in measuring differences in true performance
– Different "types" of reliability reflect different sources of measurement error
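The idea that a reliability coefficient is the share of obtained-score variance attributable to true scores can be illustrated with a small simulation (the score distributions below are hypothetical, chosen only for the example):

```python
import random
import statistics

random.seed(42)

# Hypothetical simulation: 1,000 test takers whose obtained scores are a
# true score (sd = 10) plus random measurement error (sd = 8).
true_scores = [random.gauss(50, 10) for _ in range(1000)]
obtained = [t + random.gauss(0, 8) for t in true_scores]

# Reliability coefficient: proportion of obtained-score variance
# that is due to true-score variance.
r_yy = statistics.variance(true_scores) / statistics.variance(obtained)
print(round(r_yy, 2))  # close to 10**2 / (10**2 + 8**2) ≈ 0.61
```

With a larger error standard deviation the same code yields a lower r_yy, matching the slide's point that reliability measures freedom from measurement error.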

Types of Reliability
– Test-retest reliability: assesses stability (over time/situations)
– Internal consistency reliability: assesses consistency of the content of the measure
– Parallel forms reliability: assesses equivalence of measures
  – Inter-rater reliability is a special case
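Internal consistency is commonly estimated with Cronbach's alpha, which compares item-level variance to total-score variance. A minimal sketch with made-up item scores (six respondents, four items; all numbers are hypothetical):

```python
import statistics

# Hypothetical data: rows = respondents, columns = items on a 1-5 scale.
items = [
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 5, 4, 5],
    [3, 3, 3, 4],
    [1, 2, 2, 1],
    [4, 4, 5, 4],
]

k = len(items[0])  # number of items
item_vars = [statistics.variance([row[i] for row in items]) for i in range(k)]
total_var = statistics.variance([sum(row) for row in items])

# Cronbach's alpha: high when items vary together (consistent content).
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # high, because the items track each other closely
```

If the items measured unrelated things, the item variances would sum to roughly the total variance and alpha would fall toward zero.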

Why Reliability Is Critical
– Affects accuracy of decisions about individuals
– A measure's reliability sets the upper limit on its validity
  – Maximum r_xy = sqrt(r_xx × r_yy)
– Example
  – Employment test with r_xx = .80
  – Performance ratings with r_yy = .47
  – Maximum r_xy = sqrt(.80 × .47) = .61
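The reliability ceiling can be checked directly with the slide's own numbers:

```python
# Reliability places a ceiling on validity: max r_xy = sqrt(r_xx * r_yy).
r_xx = 0.80  # reliability of the employment test
r_yy = 0.47  # reliability of the performance ratings

max_r_xy = (r_xx * r_yy) ** 0.5
print(round(max_r_xy, 2))  # 0.61, as in the slide's example
```

Even a test that perfectly measured the underlying attribute could not correlate with these performance ratings above .61, because the ratings themselves are more than half error.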

Developing a Hiring System: Validity of Measurement

What is Validity?
– The accuracy of inferences drawn from scores on a measure
– Example: an employer uses an honesty test to hire employees
  – The inference is that high scorers will be less likely to steal
  – Validation tests whether this inference holds

Validity vs. Reliability
– Reliability is a characteristic of the measure
  – Error in measurement
  – A measure either is or isn't reliable
– Validity refers to the uses of the measure
  – Error in inferences drawn
  – A measure may be valid for one purpose but not for another

Validity and Job Relatedness
– Federal regulations require employers to document the job-relatedness of selection procedures that have adverse impact
– Good practice also dictates that selection decisions be job-related
– Validation is the typical way of documenting job relatedness

Methods of Validation
– Empirical: showing a statistical relationship between predictor scores and criterion scores
  – showing that high-scoring applicants are better employees
– Content: showing a logical relationship between predictor content and job content
  – showing that the predictor measures the same knowledge or skills that are required on the job
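Empirical validation boils down to correlating predictor scores with criterion scores. A minimal sketch with hypothetical data for eight hires (both score lists are made up for illustration):

```python
def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical predictor (test) scores and later criterion (rating) scores.
test_scores = [55, 62, 70, 48, 80, 66, 74, 58]
ratings = [3.1, 3.4, 4.0, 2.8, 4.5, 3.6, 4.2, 3.0]

r_xy = pearson(test_scores, ratings)
print(round(r_xy, 2))  # strongly positive: high scorers got better ratings
```

A positive r_xy is the statistical evidence that high-scoring applicants turn out to be better employees; a value near zero would mean the test tells you nothing about later performance.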

Methods of Validation (continued)
– Construct: developing a "theory" of why a predictor is job-relevant
– Validity generalization: "borrowing" the results of empirical validation studies done on the same job in other organizations

Empirical Validation
– Concurrent criterion-related validation
– Predictive criterion-related validation

Concurrent Validation Design
– Time Period 1: test current employees and measure their performance
– Validity? (correlate test scores with the performance measures)

Predictive Validation Design
– Time Period 1: test applicants, then hire
– Time Period 2: obtain criterion measures
– Validity? (correlate test scores with the later criterion measures)

Empirical Validation: Limitations

Content Validation
– The inference being tested is that the predictor samples actual job skills and knowledge
  – not that predictor scores predict job performance
– Avoids the problems of empirical validation because no statistical relationship is tested
  – potentially useful for smaller employers

Content Validation: Limitations

Construct Validation
– Making a persuasive argument that a hiring tool is job-relevant
1. Show why the attribute is necessary
  – job and organizational analysis
2. Show that the tool measures the attribute
  – existing data, usually provided by the developer of the tool

Construct Validation Example
– Validating FOCUS as a measure of attention to detail (AD) for QC inspectors
– Develop a rationale for the importance of AD
– Defend FOCUS as a measure of AD
  – Comparison of FOCUS scores with other AD tests
  – Comparison of FOCUS and related tests
  – Comparison of scores for people in jobs requiring high or low levels of AD
  – Evidence of validity in similar jobs

Construct Validation Example
– Validating an integrity (honesty) test
– Develop a rationale for the importance of honesty
– Defend the test as a measure of honesty
  – Comparison of test scores with other honesty measures (reference checks, polygraphs, other honesty tests)
  – Comparison of test scores with related tests
  – Comparison of scores for "honest" and "dishonest" people
  – Evidence of validity in similar jobs

Validity Generalization
– Logic: a test that is valid in one situation should be valid in equivalent situations
– Fact: validities differ across situations
– Why?

Validity Generalization
Two possible explanations for why validities differ across situations:
1. Situations require different attributes
vs.
2. "Statistical artifacts": differences in
  – Sample sizes
  – Reliability of predictor and criterion measures
  – Criterion contamination/deficiency
  – Restriction of range
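Restriction of range, one of the artifacts above, can be demonstrated by simulation: if the criterion is observed only for applicants who scored above the cut, the observed validity shrinks even though the underlying relationship is unchanged. All numbers below are hypothetical:

```python
import random

random.seed(7)

def pearson(x, y):
    """Pearson correlation between two equal-length score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# Hypothetical applicant pool: the criterion genuinely depends on the predictor.
predictor = [random.gauss(0, 1) for _ in range(2000)]
criterion = [0.5 * p + random.gauss(0, 1) for p in predictor]

# Validity in the full pool vs. among hires above the predictor median.
full = pearson(predictor, criterion)
hired = [(p, c) for p, c in zip(predictor, criterion) if p > 0]
restricted = pearson([p for p, _ in hired], [c for _, c in hired])

print(round(full, 2), round(restricted, 2))  # restricted < full
```

The same true relationship thus produces different observed validities depending on how severely the sample's range is restricted, which is one reason validities appear to vary across situations.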

VG Implications
– Validities are larger and more consistent than uncorrected estimates suggest
– Validities are generalizable to comparable situations
– Tests that are valid for the majority are usually valid for minority groups
– There is at least one valid test for all jobs
– It's hard to show validity with small Ns

Validation: Summary
– Criterion-related
  – Predictive
  – Concurrent
– Content
– Construct
– Validity generalization
– "Face validity"