1 Quality of Forensic Reports: An Empirical Investigation of Three Panel Reports
Presented at the Annual Forensic Examiner Training, Honolulu, Hawaii, March 15, 2005.

2 Quality of Forensic Reports: An Empirical Investigation of Three Panel Reports
Marvin W. Acklin, Ph.D., Department of Psychiatry, JABSOM; Reneau C. Kennedy, Ed.D., Adult Mental Health Division; Richard Robinson, Argosy University; Bria Dunkin, Argosy University; Joshua Dwire, M.S., Argosy University; Brian Lees, Argosy University

3 Quality of Forensic Reports: An Empirical Investigation of Three Panel Reports
With special assistance from: The Honorable Marsha J. Waldorf; Judge Waldorf's law clerks, Teresa Morrison and Kirsha Durante; and Crystal Mueller, AMHD, Forensic Services Research

4 Logic “Studies have uniformly concluded that judges typically defer to the opinions of examiners, with rates of examiner-judge agreement often exceeding 90%. Judges typically rely solely on examiners’ written reports and, hence, the quality of the data and reasoning presented in such reports become a critical part of the CST adjudication process” (Skeem & Golding, 1998, p. 357).

5 Method We utilized the rationale of Skeem and Golding (Skeem, Golding, Cohn, & Berge, 1998; Skeem & Golding, 1998), who identified a number of factors linked to the quality of forensic reports: methods, opinions, and the rationale offered for forensic opinions. We examined these quality-related factors in our sample of reports.

6 Purpose To examine a representative sample of three panel reports submitted to the 1st Circuit Court Judiciary. To assess factors related to quality and reliability.

7 Method: Study 1 Sample We examined 50 felony cases adjudicated in the 1st Circuit Court. Inclusion criteria included a full set of three panel reports and a finding regarding fitness by the court. The 50 cases were drawn at random from 416 files stored at the 1st Circuit Court.

8 Method: Studies 2 & 3 Utilizing the same selection procedure as the larger study, we examined two subsets of the 416 files. Three panel examinations prior to a finding of Not Guilty by Reason of Insanity (NGRI: n = 10). Three panel examinations ordered after a request for Conditional Release (CR: n = 10).

9 Method: Study 4 Records from the Prosecuting Attorney, City & County of Honolulu, for 2000-2004.

10 Procedures: Inter-rater Agreement
A coding manual of pertinent items was created and the terminology it used was refined. Pre-coding trainings were conducted using three reports, to refine understanding of the items in the coding manual and to familiarize coders with the formats of the forensic reports. An inter-rater reliability trial (IRRT) was then conducted using five files (15 reports).

11 Procedure: Inter-rater Agreement
Results of the 1st IRRT:
Mean kappa = .80
Range = .13 to 1.0
Each examiner's per-item kappa range extended up to 1.0.

12 Procedure: Inter-rater Agreement
A second inter-rater reliability trial (IRRT) was conducted, using five files (15 reports), to refine the coding criteria. Results of the second IRRT:
Mean kappa = .95
Range = .55 to 1.0

13 Inter-rater Agreement Coefficients (Second IRRT) 5 Cases, 3 Raters
Item (kappa, where reported):
Professional credential of examiner (1.00)
Criminal classification
Is the case caption visible?
Is the charge visible? (0.71)
Examiner's opinion of competency to stand trial
Examiner provides rationale for competency-to-stand-trial opinion (0.75)
Examiner mentions specific impairment in relation to competency-to-stand-trial opinion
Examiner's opinion about responsibility
Examiner gives rationale for responsibility opinion (0.70)
Examiner's opinion regarding dangerousness
Examiner provides rationale for opinion on dangerousness (0.83)
Examiner provides suggestion for managing dangerousness
Methods: records reviewed
Methods: interview
Methods: collateral source, attorney
Methods: collateral source, other
Methods: psychological testing
Methods: forensic instruments used
Judicial determination of competency to stand trial
Judicial determination of dangerousness
Ease of extraction of data
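The agreement coefficients in these slides are Cohen's kappa, which corrects raw percent agreement for the agreement two raters would reach by chance given their marginal category frequencies. As an illustration only (not part of the original study; the five "fit"/"unfit" codings below are hypothetical), a minimal two-rater computation:

```python
from collections import Counter

def cohens_kappa(ratings_a, ratings_b):
    """Cohen's kappa for two raters coding the same items.

    kappa = (p_obs - p_exp) / (1 - p_exp), where p_exp is the chance
    agreement implied by each rater's marginal category frequencies.
    """
    assert len(ratings_a) == len(ratings_b) and ratings_a
    n = len(ratings_a)
    # Observed proportion of items on which the raters agree.
    p_obs = sum(a == b for a, b in zip(ratings_a, ratings_b)) / n
    # Expected chance agreement from the two raters' marginals.
    freq_a, freq_b = Counter(ratings_a), Counter(ratings_b)
    p_exp = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (p_obs - p_exp) / (1 - p_exp)

# Hypothetical codings of five reports on a "fit"/"unfit" item:
a = ["fit", "fit", "unfit", "fit", "unfit"]
b = ["fit", "unfit", "unfit", "fit", "unfit"]
print(round(cohens_kappa(a, b), 2))  # 0.62
```

Here the raters agree on 4 of 5 items (80%), but chance agreement is 48%, so kappa drops to about .62; this chance correction is why the kappas reported above can be far lower than raw agreement rates.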

14 Study 1: Three Panel Evaluations CST/CResp/Danger

15 Number of Different Examiners

16 Credential of Examiner
N = 416 cases n = 150 reports

17 Classification of Criminal Offence
N = 416 cases n = 150 reports

18 Case Caption Visible? N = 416 cases n = 150 reports

19 Charge Visible? N = 416 cases n = 149 reports

20 Examiner Opinion on Competency to Stand Trial (CST)
N = 416 cases n = 150 reports

21 Examiner Rationale for CST Opinion
N = 416 cases n = 150 reports

22 Mention of Specific Impairment in Relation to CST Opinion
N = 416 cases n = 150 reports

23 Examiner Opinion Concerning Criminal Responsibility
N = 416 cases n = 150 reports

24 Examiner Rationale for Criminal Responsibility Opinion
N = 416 cases n = 150 reports

25 Examiner Opinion Regarding Dangerousness
N = 416 cases n = 150 reports

26 Examiner Rationale for Dangerousness Opinion
N = 416 cases n = 150 reports

27 Examiner Suggestions for Managing Dangerousness/Risk Reduction
N = 416 cases n = 150 reports

28 Evaluation Methods N = 416 cases n = 150 reports

29 Judicial Determination of CST
N = 416 cases n = 150 reports

30 Ease of Data Extraction
N = 416 cases n = 149 reports

31 Majority/Unanimity Agreement
CST:
58%: unanimous agreement
32%: at least 2 examiners and the judge agree
4%: at least 1 examiner and the judge agree
6%: examiners agree, but the judicial determination differs

32 Inter-examiner Agreement
CST: mean kappa = .42 (range .36 to .55)
Responsibility: mean kappa = .41 (range .39 to .45)
Dangerousness: mean kappa = .25 (range .14 to .36)

33 Judge-Examiner Agreement
CST: mean kappa = .49 (range .39 to .60)
Dangerousness: mean kappa = .05 (range .05 to .10). Cases where an examiner opinion on dangerousness was not ordered were excluded from the calculation; the judicial determination in all cases was “No Determination.”

34 Summary of Findings and Recommendations: Study 1

35 Study 2: Three Panel Evaluations Prior to NGRI

36 Number of Different Examiners

37 Credential of Examiner
N = 416 cases n = 30 reports

38 Classification of Criminal Offence
N = 416 cases n = 30 reports

39 Examiner Opinion Concerning Criminal Responsibility
N = 416 cases n = 30 reports

40 Examiner Rationale for Criminal Responsibility Opinion
N = 416 cases n = 30 reports

41 Examiner Opinion Regarding Dangerousness
N = 416 cases n = 30 reports

42 Examiner Rationale for Dangerousness Opinion
N = 416 cases n = 30 reports

43 Examiner Suggestions for Managing Dangerousness/Risk Reduction
N = 416 cases n = 30 reports

44 Evaluation Methods N = 416 cases n = 30 reports

45 Majority/Unanimity Agreement
CST:
70%: unanimous agreement
30%: at least 2 examiners and the judge agree
Dangerousness:
20%: unanimous agreement
40%: at least 1 examiner and the judge agree
10%: two examiners agree; the judge and the 3rd examiner's opinions differed completely

46 Inter-examiner Agreement
CST: mean kappa = .61 (range .46 to .76)
Dangerousness: mean kappa = .17 (range up to .40). Cases where an opinion on dangerousness was not requested were excluded from the calculation.

47 Judge-Examiner Agreement
CST: mean kappa = .79 (range .60 to 1.0)
Dangerousness: mean kappa = .24 (range .15 to .24)

48 Summary of Findings and Recommendations: Study 2

49 Study 3: Three Panel Evaluations Prior to CR

50 Number of Different Examiners

51 Credential of Examiner
N = 416 cases n = 30 reports

52 Classification of Criminal Offence
N = 416 cases n = 30 reports

53 Examiner Opinion Regarding Conditional Release
N = 416 cases n = 30 reports

54 Examiner Rationale for Conditional Release Opinion
N = 416 cases n = 30 reports

55 Examiner Opinion Regarding Dangerousness
N = 416 cases n = 30 reports

56 Examiner Rationale for Dangerousness Opinion
N = 416 cases n = 30 reports

57 Examiner Opinion Regarding Level of Dangerousness
N = 416 cases n = 27 reports

58 Examiner Suggestions for Managing Dangerousness/Risk Reduction
N = 416 cases n = 30 reports

59 Examiner Provides Time Frame for Dangerousness Opinion
N = 416 cases n = 30 reports

60 Evaluation Methods N = 416 cases n = 30 reports

61 Judicial Determination of CR
N = 416 cases n = 30 reports

62 Judicial Determination of Dangerousness
N = 416 cases n = 30 reports

63 Treatment Recommendation of CR Population
N = 416 cases n = 30 reports

64 Ease of Data Extraction
N = 416 cases n = 30 reports

65 Majority/Unanimity Agreement
CR:
20%: unanimous agreement
60%: at least 2 examiners and the judge agree
10%: at least 1 examiner and the judge agree
10%: examiners agree, but the judicial determination differs
Dangerousness:
40%: examiners agree, but the judicial determination differs
20%: at least 1 examiner and the judge agree
40%: two examiners agreed; the judge and the 3rd examiner's opinions differed completely

66 Inter-examiner Agreement
CR: mean kappa = .30 (range .11 to .41)
Dangerousness: mean kappa = .44

67 Judge-Examiner Agreement
CR: mean kappa = .36
Dangerousness: mean kappa = .13

68 Summary of Findings and Recommendations: Study 3

69 Summary
Strengths of reports
Weaknesses of reports

70 NGRI Some have asserted that there is a higher rate of NGRI acquittals in Hawaii. Nationally, between 1.5% and 2.5% of felony defendants raise the insanity defense, and of those only about 25% are successful (Melton, Petrila, Poythress, & Slobogin, 1997, p. 188). In Hawaii, our NGRI rates are consistent with the national average, as reported by the Honolulu Prosecutor's Office.

71 NGRI-City & County of Honolulu 2000-2004
Calendar Year | Total Felony Cases | Convictions | Acquittals | Acquittals (Insanity) | NGRI Percentage
2000 | 2057 | 1992 | 58 | 7 | .34%
2001 | 2149 | 2090 | 39 | 20 | .93%
2002 | 2108 | 2044 | 39 | 25 | 1.2%
2003 | 2218 | 2155 | 48 | 15 | .68%
2004 | 2186 | 2120 | 46 | 20 | .91%
Totals | 10,718 | 10,331 | 230 | 87 | .81%
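The NGRI percentage in each row is the number of insanity acquittals divided by total felony cases for that year. As a quick consistency check (a sketch, not from the source; the 2004 insanity count is reconstructed from the totals row, 87 minus the other years):

```python
# Per-year totals and insanity-acquittal counts from the slide's table.
totals_insanity = 87
cases = {2000: 2057, 2001: 2149, 2002: 2108, 2003: 2218, 2004: 2186}
insanity = {2000: 7, 2001: 20, 2002: 25, 2003: 15}
# Reconstruct the 2004 count from the totals row: 87 - (7+20+25+15) = 20.
insanity[2004] = totals_insanity - sum(insanity.values())

for year in cases:
    rate = 100 * insanity[year] / cases[year]
    print(f"{year}: {rate:.2f}%")

# Overall rate: 87 insanity acquittals out of 10,718 felony cases.
print(f"overall: {100 * totals_insanity / sum(cases.values()):.2f}%")
```

This reproduces the column (.34%, .93%, 1.19%, .68%, .91%, and .81% overall), confirming that the 2000 figure is .34% rather than .034%.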

72 Biases
Lack of standardization: examiner/method/criterion variance.
So much variability between examiners that the reports were not helpful to the trier of fact.
Overall quality of reports will be poor.
Ph.D. and Psy.D. examiners write better reports than M.D. examiners.

73 Conclusion/Recommendations

74 References
Hall, H. (2002, Summer). What ails the insanity defense in Hawaii? Hawaii Psychologist, 46, 12.
Melton, G., Petrila, J., Poythress, N., & Slobogin, C. (1997). Psychological evaluations for the courts. New York: Guilford Press.
Skeem, J., Golding, S., Cohn, N., & Berge, G. (1998). The logic and reliability of evaluations of competence to stand trial. Law and Human Behavior, 22.
Skeem, J., & Golding, S. (1998). Community examiners’ evaluations of competence to stand trial: Common problems and suggestions for improvement. Professional Psychology: Research and Practice, 29.

