Coding with R-PAS: Does Prior Training with the Exner Comprehensive System Impact Interrater Reliability Compared to Those Examiners with Only R-PAS Based Training?

Presentation transcript:

Coding with R-PAS: Does Prior Training with the Exner Comprehensive System Impact Interrater Reliability Compared to Those Examiners with Only R-PAS Based Training?
Jennifer H. Lewey, M.A., Trisha M. Kivisalu, M.A., Thomas W. Shaffer, Ph.D., ABPP, & Merle L. Canfield, Ph.D.
California School of Professional Psychology at Alliant International University

Introduction
The Rorschach Inkblot Method (RIM) is a free-response personality assessment instrument widely used in clinical and research settings worldwide. While multiple coding systems exist, the predominant system has been the Comprehensive System (CS). Published in 2011, the Rorschach Performance Assessment System (R-PAS) provides detailed guidelines for test administration and for coding responses, and the two systems differ on a number of variables. Response-level agreement is essential for determining how precisely coders apply the same categorical coding scheme to responses, and it yields useful information for monitoring training and practice. Interrater agreement research is important because it reflects the degree of consistency among raters, which is a precursor for accuracy in using the Rorschach as a clinical assessment tool (Sahly, Shaffer, Erdberg, & O'Toole, 2011). This investigation examines interrater reliability between two examiner groups: (1) graduate student examiners with previous Comprehensive System training who were subsequently trained in R-PAS, and (2) graduate students trained solely in R-PAS. The purpose of this study was twofold: (1) to ascertain whether interrater reliability differed between examiners with these two training backgrounds, and (2) to identify particular strengths and weaknesses that could inform future instruction on coding with these two interpretive systems for the Rorschach in clinical and academic settings.

Methodology
The current study analyzed a non-clinical convenience sample of 16 participants who provided a total of 364 responses. All examiners were psychology doctoral students enrolled in an advanced research practicum course at Alliant International University in Fresno, California. Data for the sample were collected from January 2013 through May 2014 as part of a normative data collection study. Graduate students used a blind coding method, and any differences were resolved in order to reach interrater agreement. Discrepancies between the initial and blind coders' ratings were analyzed for each variable with SPSS and expressed as percent agreement and kappa (κ) values.
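For readers who want to reproduce the response-level statistics outside SPSS, the following minimal Python sketch computes percent agreement and Cohen's kappa for a single categorical variable; the ten Location codes and the function names are hypothetical illustrations, not data or syntax from the study.

```python
from collections import Counter

def percent_agreement(coder_a, coder_b):
    """Proportion of responses that the two coders scored identically."""
    matches = sum(a == b for a, b in zip(coder_a, coder_b))
    return matches / len(coder_a)

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: agreement on one categorical variable, corrected for chance."""
    n = len(coder_a)
    p_observed = percent_agreement(coder_a, coder_b)
    counts_a, counts_b = Counter(coder_a), Counter(coder_b)
    # Expected agreement if each coder assigned codes independently at their observed base rates.
    p_expected = sum(counts_a[c] * counts_b[c] for c in set(coder_a) | set(coder_b)) / (n * n)
    return (p_observed - p_expected) / (1 - p_expected)

# Hypothetical Location codes from the initial coder and the blind coder for ten responses.
initial_coder = ["W", "D", "D", "Dd", "W", "D", "W", "Dd", "D", "W"]
blind_coder   = ["W", "D", "Dd", "Dd", "W", "D", "W", "D", "D", "W"]

print(f"Percent agreement: {percent_agreement(initial_coder, blind_coder):.0%}")
print(f"Cohen's kappa:     {cohens_kappa(initial_coder, blind_coder):.2f}")
```

Applying the same computation to the paired codes for every R-PAS variable yields per-variable agreement and κ values of the kind summarized in Table 1.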
Results
Findings for each variable are presented in Table 1. Cicchetti's (1994) guidelines, presented in Table 2, were used to interpret the kappa values. Overall, rates of agreement and kappa values were mixed both for examiners trained in the CS and R-PAS and for those trained solely in R-PAS (see Table 3).

Discussion
In summary, the findings indicate minimal interrater reliability differences between examiners trained in both the CS and R-PAS and those trained solely in R-PAS. This suggests that, regardless of prior training, R-PAS variables can be coded reliably among raters when the detailed coding guidelines designed by the R-PAS developers are followed. In particular, select variables may show higher concordance rates because examiners were already familiar with the same variables in the Comprehensive System; conversely, prior CS training may confound reliability rates for variables that exist solely in R-PAS. These results emphasize the importance of training and blind-coder practice to ensure coding accuracy with this empirically based coding system. Overall, interrater reliability is essential for ensuring accurate coding and provides insights for future training of graduate students and professionals.

Table 1
Comparison of CS & R-PAS Versus R-PAS Trained Only: Rorschach Interrater Agreement
[Percent agreement and κ for each coded variable, reported separately for the CS & R-PAS and R-PAS only training groups, across the Location, Space, Content Class, Synthesis/Vagueness, Pair, Form Quality, Popular, Determinants, Cognitive Codes, and Thematic Codes categories.]

Table 2
Cicchetti's Guidelines for Interpreting Kappa (κ) Results
Category      Kappa value
Poor          < .40
Fair          .40 – .59
Good          .60 – .74
Excellent     .75 – 1.00

Table 3
Select Variables Highlighting the Respective Training with Higher Concordance Rates
[For each coding category, the variables on which one training group (CS & R-PAS or R-PAS only) showed higher concordance than the other, including the summed Location variables*, SI and SR, several Content Class codes, Vg and Sy, FQu, P, selected Determinants, and the Thematic Codes AGM, MAH, ODL, and AGC.]
Note. *Denotes summation of variables for the Location category.
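As a small companion to Table 2, the sketch below applies Cicchetti's (1994) cut-offs to label kappa values qualitatively; the two example values are hypothetical, not results drawn from Table 1.

```python
def cicchetti_label(kappa):
    """Qualitative label for a kappa value under Cicchetti's (1994) guidelines (Table 2)."""
    if kappa < 0.40:
        return "Poor"
    if kappa < 0.60:
        return "Fair"
    if kappa < 0.75:
        return "Good"
    return "Excellent"

# Hypothetical kappa values for two coded variables.
for variable, kappa in [("W", 0.82), ("Vg", 0.55)]:
    print(f"{variable}: κ = {kappa:.2f} ({cicchetti_label(kappa)})")
```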