
1 Launching and Nurturing a Performance Management System G.S. (Jeb) Brown, Ph.D. Center for Clinical Informatics

2 Performance… of what? Of whom? Suggestions for performance criteria:
1. Patient benefit is the reason the system exists – performance criteria must relate directly to patient benefit.
2. Patients themselves are the best source of information on patient benefit.
3. The goal of performance management is to understand what (or who) drives outcomes, and to use this information to improve outcomes for patients.

3 Who benefits?
–Patients!
–Employers and other payers.
–Clinicians, providers and behavioral healthcare organizations that can deliver high-value services (as measured by outcomes).
–The field as a whole: real-world evidence of outcomes demonstrates the value of behavioral health services within the larger context of overall medical costs.

4 Who loses? Providers, facilities and clinicians that cannot demonstrate the effectiveness (value) of their services. Failure to measure performance protects the financial interests of the least effective providers.

5 Treatments or clinicians? Current trends in measuring performance are focused on “Evidence-Based Practices” – identify the most effective treatments and encourage their use. This is a good strategy if most of the variance in outcomes is due to the treatment… but what if it isn’t? To manage performance it is first necessary to understand the primary sources of variance – what really drives outcomes?

6 Clinician effects
–Clinicians differ widely in their “effectiveness,” resulting in wide differences in outcomes. These results cannot be explained by theoretical orientation, treatment methods, years of training or experience.
–The effectiveness of all treatments, including medications, is mediated by clinician effects.
–Failure to measure and account for clinician effects, in controlled studies or in the real world, is… BAD SCIENCE!

7 Primary barrier… the clinician
–Most clinicians believe that their outcomes are above average and their services are of high value, without any need to actually measure this.
–Many clinicians feel discomfort at the thought that their performance might be evaluated by their patients via self-report outcome questionnaires.
–Many clinicians believe that a simple outcome questionnaire cannot provide useful information about their patients beyond what they obtain by forming their own clinical judgments.

8 Secondary barriers
–Faith in treatments (therapy methods, drugs) to deliver consistent and predictable outcomes.
–Belief that the cost of the services is so low (relative to overall medical costs) that meaningful performance management isn’t cost effective.
–Belief that meaningful performance management isn’t necessary to retain existing business or acquire new customers.
–Lack of organizational commitment to place the patient first, and/or desire to avoid conflict with clinicians.

9 Overview and agenda…
1. Information drawn from 5 performance management projects:
–Human Affairs International: 1996–1999
–Brigham Young University Comprehensive Clinic: 1996–present (Lambert & others)
–PacifiCare Behavioral Health: 1999–present
–Resources for Living: 2001–present
–Accountable Behavioral Health Care Alliance: 2002–present

10 Overview - continued
2. Putting together a performance management system
–Measures
–JET (Just Enough Technology)
–Software choices
3. Measurement and feedback methods
–Case mix adjustment
–Tracking trajectory of change
–Reporting outcomes
–Identifying high-value clinicians

11 Overview - continued
4. Goldilocks effect
–Cause: clinicians and patients exercising broad discretion in the method, intensity and duration of treatment.
–Result: patients tend to receive treatment that is “just about right” – not too much and not too little of a treatment that seems to work for them.
–More is not always better.
–Impact on dose (cost) benefit analyses; implications for cost management.

12 Overview - continued
5. Clinician effects
–The impact of clinician effects on treatment outcomes is the most important new research finding to emerge in the last few years.
–Recently published analyses of data from controlled studies and large samples of patients receiving “treatment as usual” within the community provide compelling evidence that the clinician may be the single most important factor driving the outcome.
–Differences in clinician effectiveness are not due to training or years of experience.

13 Overview - continued
6. Putting it all together, making it work
–4 stages of development and implementation of an outcomes management program.
–Strategies for success & formulas for failure.
–Outcomes-informed care: the client comes first; one client at a time.
–Nurturing an outcomes-informed organizational culture (here’s a hint: show the CFO the ROI).

14 Human Affairs International (HAI)
–Outcome questionnaires: Outcome Questionnaire-45 & Youth Outcome Questionnaire (OQ-45 & YOQ).
–Michael Lambert, PhD of Brigham Young University spent a six-month sabbatical working onsite at HAI to develop a clinical information system.
–Several hundred individual clinicians and over 20 multidisciplinary group practices collected data between 1996 and 1999.
–Magellan Health Services acquired HAI and discontinued the program.

15 BYU Comprehensive Clinic
–Outcome measures: OQ-45 & YOQ.
–Serves a university population.
–Lambert and colleagues have conducted numerous studies on the use of feedback to enhance outcomes – one client at a time.

16 PacifiCare Behavioral Health (PBH)
–PBH (now a part of United Behavioral Health) manages behavioral health care for over 5,000,000 covered lives annually.
–Over 100 multidisciplinary clinics and 12,000 psychotherapists participating.
–Outcome measures: Life Status Questionnaire & Youth Life Status Questionnaire.
–Measures voluntarily completed by 80% of all clients.
–Other research consultants: Lambert & Burlingame (BYU); Wampold (U of Wisc – Madison); Ettner (UCLA); Doucette (GWU).

17 Resources for Living (RFL)
–Provides telephonic EAP services; data collected over the phone at time of service; clinicians receive real-time feedback on trajectory of improvement and working alliance (SIGNAL system).
–Outcome measures: Outcome Rating Scale (4 items); also utilizes the Session Rating Scale (4 items) to assess the working alliance.
–Other research consultants: Miller and Duncan, Institute for the Study of Therapeutic Change.

18 Accountable Behavioral Healthcare Alliance (ABHA)
–Managed behavioral healthcare organization serving Oregon Health Plan members in a 5-county rural area.
–Outcome measure: Oregon Change Index (4 items; based on the Outcome Rating Scale).
–Other research consultants: Miller, Institute for the Study of Therapeutic Change.

19 10 Guiding Principles
1. Measure to manage.
2. Management requires frequent feedback over time.
3. Keep it simple, make it matter.
4. Keep it brief, measure often.
5. Create benchmarks, compare results.

20 10 Guiding Principles - continued
6. Minimize opportunity for feedback-induced bias.
7. Provide the right information at the right time to the right person to make a difference.
8. Build in flexibility so that the system evolves with the experience of the users.
9. Maintain central control of data and reporting.
10. Establish and protect a core data set.

21 Five Minute Rule
–If it takes more than five minutes to collect the data, you’re in trouble.
–To manage outcomes, you need to collect the right data to measure and model the variance in outcomes.
–More data = more variance explained, but with diminishing returns.
–Find the sweet spot – variance per minute.
–Clinicians may be willing to collect more than 5 minutes’ worth of data if there is clear benefit.
–Be parsimonious!

22 Measure often!
–Most of the change (better or worse) occurs in the first few weeks of treatment.
–Frequent measurement results in better detection of patients at risk for premature termination.
–PBH asks for data at the 1st, 3rd and 5th sessions, and every 5th session thereafter.
–BYU, RFL and ABHA collect the outcome measure at every session.
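
As a minimal sketch of such a schedule in code (assuming “every 5th session thereafter” means sessions 10, 15, 20, …, which the slide does not spell out), a PBH-style rule fits in a few lines of Python:

```python
def administer_measure(session: int) -> bool:
    """True if an outcome measure should be collected at this session
    under a PBH-style schedule: sessions 1, 3 and 5, then every 5th
    session thereafter."""
    return session in (1, 3, 5) or (session > 5 and session % 5 == 0)

# Sessions flagged for measurement in the first 20: [1, 3, 5, 10, 15, 20]
print([s for s in range(1, 21) if administer_measure(s)])
```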

23 Selecting outcome measures
–Clinician-completed scales are subject to feedback-induced bias.
–Patient-completed measures tend to show faster change in the near term and less change in the long term than clinician-completed measures.
–Clinician perception of the purposes of the measures can induce bias at the clinician level that is difficult or impossible to control for.

24 In search of variance
–In order to improve outcomes, it is necessary to understand the sources of variance in outcomes.
–The ability to measure sources of variance is limited by the reliability and validity of the measures.
–More data = greater reliability/validity = more variance explained.
–More data = more time, more cost, more hassle and probably lower compliance.

25 Variance per minute
–A little data goes a long way. A lot more data doesn’t provide proportionately more information.
–Fine-tune the data set through item analysis and other methods to identify those measures (items) that provide the greatest psychometric information in the least amount of time.
–Optimize the variance per minute; find the organization’s “sweet spot”.
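
One common item-analysis screen is the corrected item-total correlation: items that track the rest of the scale carry more usable signal per minute of patient time. A small illustration in Python (simulated responses, not data from any of the projects above):

```python
import numpy as np

def corrected_item_total(responses: np.ndarray) -> np.ndarray:
    """Correlate each item (column) with the sum of the remaining
    items; higher values flag items carrying more of the scale's
    common variance."""
    totals = responses.sum(axis=1)
    return np.array([
        np.corrcoef(responses[:, j], totals - responses[:, j])[0, 1]
        for j in range(responses.shape[1])
    ])

# 200 simulated patients x 10 items driven by one latent factor
rng = np.random.default_rng(0)
theta = rng.normal(size=(200, 1))             # latent distress
items = theta + rng.normal(size=(200, 10))    # noisy indicators
print(corrected_item_total(items).round(2))
```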

26 Maximizing reliability
–Reliability refers to the consistency with which a set of items measures some variable of interest.
–Coefficient alpha is a measure of the internal consistency of the measure at one point in time.
–Test-retest reliability assesses the stability of scores over time.
–Items that correlate highly with one another increase reliability.
–More items = greater reliability.
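
Coefficient alpha is simple enough to compute directly from a response matrix; a minimal sketch (the simulated data are illustrative only):

```python
import numpy as np

def cronbach_alpha(responses: np.ndarray) -> float:
    """Coefficient alpha for an (n_patients x k_items) matrix:
    alpha = k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = responses.shape[1]
    item_vars = responses.var(axis=0, ddof=1).sum()
    total_var = responses.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

# Items sharing a common factor yield high internal consistency
rng = np.random.default_rng(1)
theta = rng.normal(size=(500, 1))
print(round(cronbach_alpha(theta + rng.normal(size=(500, 20))), 2))
```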

27 Item Response Theory
–Item Response Theory (IRT) uses different assumptions than classical test theory when optimizing items on a questionnaire.
–Selects for items that provide information on change for patients with different levels of symptom severity.
–Can be used to optimize test length – tends to result in shorter measures.
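
The machinery behind IRT item selection is the item information function. For the two-parameter logistic (2PL) model (one common IRT model, used here purely as an illustration; the parameter values are invented):

```python
import numpy as np

def item_information_2pl(theta, a, b):
    """Fisher information of a 2PL item at ability/severity theta:
    I(theta) = a^2 * P * (1 - P), with P = 1 / (1 + exp(-a*(theta - b)))."""
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))
    return a ** 2 * p * (1.0 - p)

# An item with discrimination a=1.5 located at b=1.0 is most informative
# for patients about one SD above average severity, and nearly useless
# three SDs below it -- the basis for picking items by severity range.
theta = np.linspace(-3, 3, 7)
print(item_information_2pl(theta, a=1.5, b=1.0).round(3))
```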

28 Finding the item # sweet spot
–More items = greater reliability, but only up to a point.
–The OQ-45 has 45 items and a reliability of .93 (coefficient alpha).
–The OQ-30 has a reliability of .93.
–10 well-selected items from the OQ-30 have a reliability of .90.
–The Outcome Rating Scale (4 items) has reported reliability of .80 to .90.
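
The flattening curve can be made concrete with the Spearman-Brown prophecy formula from classical test theory (a sketch assuming parallel items; the numbers are illustrative, not re-analyses of the OQ measures):

```python
def spearman_brown(reliability: float, length_factor: float) -> float:
    """Predicted reliability when a test is lengthened or shortened by
    `length_factor`, assuming parallel items: r' = k*r / (1 + (k-1)*r)."""
    k = length_factor
    return k * reliability / (1 + (k - 1) * reliability)

# Starting from a 10-item scale with reliability .90:
print(round(spearman_brown(0.90, 4.5), 3))  # 4.5x longer (~45 items) -> 0.976
print(round(spearman_brown(0.90, 0.4), 3))  # 0.4x as long (~4 items) -> 0.783
```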

29 Validity
–Face validity matters! Does the questionnaire seem to be asking about the right things?
–Are these the kinds of problems that people seeking mental health services commonly report?
–Are these items that we expect to see improve as the result of treatment?
–If the items make sense to the patients, it is probably a good set of items.

30 Global factor
–Items inquiring about the symptoms and problems patients most commonly seek help for tend to correlate with one another.
–Example: items about sadness correlate with items about anxiety. Both correlate with items about relationships.
–Factor analyses of a variety of outcome measures reveal that most items load on a common factor (“global distress factor”).

31 Concurrent validity
–Due to the existence of a global factor, all patient-completed outcome questionnaires tend to correlate highly with one another.
–A global measure with an adequate sampling of symptom items will correlate highly with disease-specific measures such as the Beck Depression Inventory or the Zung Anxiety Scale.

32 Multiple factors in children
–Child and adolescent measures may have a more complex factor structure than adult measures.
–Separate factors for “externalizing” and “internalizing” symptoms.
–The global factor is still the most dominant factor in child/adolescent measures.

33 JET: Just Enough Technology
–Outcomes management depends on information technology.
–Technology adds cost, complexity and risk of failure.
–Start modestly – use just enough technology to get the job done. Add complexity only as necessary.
–Beware of innovation-induced paralysis.

34 Capturing the data
–Computers, PDAs and other devices are cool, but… they are expensive, someone still needs to enter the data, and if the patient is expected to enter the data, someone has to teach the patient to use the device.
–Advantages of paper and pencil:
–Low cost
–No instructions needed
–Information immediately available to clinician
–Easily scanned for data capture

35 Scanning solutions
–Teleform: high-end fax-to-file solution for OCR and OMR; many advanced features; ideal for enterprise-level use. http://www.verity.com/
–Remark: scan-to-file with OCR and OMR; less costly than Teleform. http://www.principiaproducts.com/
–Data capture vendors: http://www.scantron.com/ http://www.ceoimage.com/

36 Building a system
–Sophisticated outcomes management systems can be created using off-the-shelf software.
–Example: PacifiCare ALERT system
–Teleform for data capture
–SAS for data warehousing and reporting
–Microsoft Office (Word, Excel, Access) for reporting
–SAS commands and Visual Basic scripts used to automate processes, such as permitting SAS to output data to Excel for use in a mail merge process by Word to create reports for the clinicians.
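
The gather/merge/report pattern is independent of the tool chain. A minimal stand-in in plain Python (purely illustrative: the file name and column names are hypothetical, and this replaces rather than reproduces the SAS/Excel/Word stack) shows the shape of the pipeline:

```python
import csv
from collections import defaultdict
from pathlib import Path

# scores.csv (hypothetical layout):
# clinician_id,patient_id,intake_score,last_score
by_clinician = defaultdict(list)
with open("scores.csv", newline="") as f:
    for row in csv.DictReader(f):
        by_clinician[row["clinician_id"]].append(row)

# One plain-text report per clinician -- the "mail merge" step
Path("reports").mkdir(exist_ok=True)
for clinician, rows in by_clinician.items():
    changes = [float(r["intake_score"]) - float(r["last_score"]) for r in rows]
    (Path("reports") / f"{clinician}.txt").write_text(
        f"Clinician {clinician}: {len(rows)} cases, "
        f"mean change {sum(changes) / len(changes):.1f}\n"
    )
```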

37 How much should it cost? The cost of routine data collection and sophisticated reporting at all levels of the organization should be less than 1% of the cost of care… if you use JET!

38 Measuring change
–Outcomes are generally evaluated by comparing pre- and post-treatment test scores. Change score = intake score – last score.
–The “intent to treat” method includes all cases with two or more assessments rather than only cases that “complete” treatment.
–The intent to treat method encourages clinicians to keep patients engaged in treatment.

39 Standardizing change scores
–Change scores are often reported as “effect size” – the preferred statistic for research reports.
–Effect size is usually calculated by dividing the change scores by the standard deviation of the outcome measure at intake.
–If adequate normative information is available on the outcome measure, there are advantages to using the standard deviation of the outcome measure in a non-treatment population.
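
Both conventions fit in one small function; a sketch (the sample scores are invented, and higher scores are assumed to mean greater distress):

```python
import numpy as np

def effect_size(intake, last, sd_reference=None):
    """Pre/post effect size: mean change divided by a reference SD.
    Defaults to the SD of intake scores; pass the SD of the measure in
    a non-treatment (normative) sample when one is available."""
    intake = np.asarray(intake, dtype=float)
    last = np.asarray(last, dtype=float)
    change = intake - last    # positive = improvement on a distress measure
    sd = sd_reference if sd_reference is not None else intake.std(ddof=1)
    return change.mean() / sd

# Intent-to-treat sample: every case with two or more assessments
intake = [92, 75, 88, 64, 101]
last = [70, 68, 60, 66, 74]
print(round(effect_size(intake, last), 2))   # ~1.12
```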

40 Benchmarking outcomes
–Measuring outcomes is of little use without some basis of comparison. Are the outcomes good? Compared to what?
–Clinicians and organizations differ in the kinds of cases they treat.
–Benchmarking outcomes requires a method of accounting for differences in case mix.

41 Regression: a fact of life
–With any repeated measurement, regression artifacts are a fact of life.
–Scores are correlated across time. A test score at one point in time is the single best predictor of a score at a subsequent point in time.
–Patients with high scores and low scores will tend to have scores closer to the mean on subsequent measurement.
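
The artifact is easy to demonstrate by simulation; in the sketch below (all numbers invented) no one actually changes, yet groups selected for extreme first scores appear to improve or deteriorate:

```python
import numpy as np

# Two measurements of an unchanging trait, correlated through the
# shared true score -- no treatment effect anywhere.
rng = np.random.default_rng(42)
true_score = rng.normal(50, 8, size=10_000)
time1 = true_score + rng.normal(0, 5, size=10_000)
time2 = true_score + rng.normal(0, 5, size=10_000)

high, low = time1 > 60, time1 < 40
print(f"high scorers: {time1[high].mean():.1f} -> {time2[high].mean():.1f}")
print(f"low scorers:  {time1[low].mean():.1f} -> {time2[low].mean():.1f}")
```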

42 Regression implications
–Patients with high distress report greater overall change and greater change per session than low-distress patients.
–Patients with scores in the normal (non-clinical) range tend to report little improvement, or even show increased distress over time.
–Focusing treatment resources on patients with the most severe symptoms results in improved outcomes.

43 Case mix
–Case mix variables are those variables present at the beginning of the treatment episode that are predictive of the outcome.
–The intake score accounts for 18% of the variance in change scores in the PBH data.
–Adding age, sex and diagnosis to the predictive model accounts for < 1% additional variance.

44 Benchmark Score
–Regression techniques are used to model the relationship between intake scores and patient variables (age, diagnosis) and the change measured in treatment.
–Benchmark Score: residualized change score (the difference between predicted and actual effect size).
–Clinicians are ranked based on the mean Benchmark Score for their cases.
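
A minimal version of the residualized-change idea, using ordinary least squares on the intake score alone (illustrative data; the actual case-mix model described above includes more variables):

```python
import numpy as np

def benchmark_scores(intake, change):
    """Residualized change: regress observed change on intake score,
    then score each case as actual minus predicted change.
    Positive = more improvement than the case mix predicts."""
    intake = np.asarray(intake, dtype=float)
    change = np.asarray(change, dtype=float)
    X = np.column_stack([np.ones_like(intake), intake])
    beta, *_ = np.linalg.lstsq(X, change, rcond=None)
    return change - X @ beta

# Rank clinicians by the mean residual over their own cases
intake = np.array([95, 80, 88, 60, 72, 99])
change = np.array([30, 10, 25, 2, 4, 20])
clinician = np.array(["A", "A", "B", "B", "C", "C"])
resid = benchmark_scores(intake, change)
for c in ("A", "B", "C"):
    print(c, round(float(resid[clinician == c].mean()), 1))
```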

45 Regression and case mix

46 At risk for poor outcome Patients with a poor initial response to treatment are at risk for a poor outcome due to the probability of unplanned treatment termination. A poor initial response to treatment is not a strong predictor of future response to treatment, so long as the patient remains in treatment.

47 Predicting change
–The single best predictor of a future test score is the most recent test score.
–Regression analysis reveals that the relationship between intake scores and subsequent test scores is generally linear, with large variance between the predicted and actual scores (residualized scores).
–The predicted trajectory of change can be estimated using simple regression formulas to predict scores at each measurement point.
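
Concretely, one simple regression per measurement point yields the trajectory. The sketch below shows the generic form only (the actual formulas appeared as an image on the following slide; these function and variable names are hypothetical):

```python
import numpy as np

def fit_trajectory(intake, scores_by_session):
    """Fit one simple regression per measurement point:
    predicted score at session t = a_t + b_t * intake score.
    `scores_by_session`: {session: array of observed scores for the
    same cases as `intake`}. Returns {session: (a_t, b_t)}."""
    intake = np.asarray(intake, dtype=float)
    X = np.column_stack([np.ones_like(intake), intake])
    coefs = {}
    for t, y in scores_by_session.items():
        (a_t, b_t), *_ = np.linalg.lstsq(X, np.asarray(y, float), rcond=None)
        coefs[t] = (a_t, b_t)
    return coefs

def predicted_trajectory(coefs, intake_score):
    """Expected score at each session for a given intake score."""
    return {t: a + b * intake_score for t, (a, b) in sorted(coefs.items())}
```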

48 Regression formulas

49 Trajectory of change graph

50 Past and future change Prior change is not highly predictive of future change. Odds of additional improvement remain good if the test score is in the clinical range and the patient remains engaged in treatment. Implication: Remain optimistic; prevent premature termination; keep patient engaged in treatment.

51 Sample analysis
–Cases began treatment in the severe range with no improvement, or worse, by week 6.
–The average case in the sample was 5 points worse at week 6.
–Approximately half of these cases had no data after week 6.
–Those that continued in treatment averaged 10 points of improvement after week 6!

52 Remain optimistic!

53 Goldilocks Effect
–Describes effects that are due to freedom of choice on the part of clinicians and patients with regard to method, intensity and duration of treatment.
–Present in data collected in naturalistic settings but not in controlled studies.
–Most research on treatment outcomes has been designed to eliminate these effects in order to investigate a particular treatment at a particular intensity and duration.

54 Why Goldilocks? In the story of Goldilocks and the Three Bears, Goldilocks keeps trying different things (chairs, porridge and beds), each time seeking the one that is just right for her. Clinicians and patients continuously make choices about treatment method(s), frequency of sessions, and duration of treatment based on the rate of improvement in prior sessions.

55 Goldilocks & QI Little attention has been given to the possible benefits of encouraging the Goldilocks Effect. Many quality improvement initiatives encourage use of “empirically validated treatments” and adherence to various treatment protocols, thus making the implicit assumption that quality is improved by limiting the Goldilocks Effect.

56 Hypothesized Mechanisms
–Patients seek treatment when their level of distress is high.
–Utilization of services (intensity & duration) is a function of the patient’s level of distress and rate of improvement.
–The clinician/patient dyad makes decisions in an ongoing, dynamic manner with regard to treatment methods, intensity and duration.

57 Goldilocks and utilization
–Length of treatment is as much a function of outcome as outcome is of length of treatment.
–Patients with rapid improvement have good outcomes while tending to utilize relatively few services.
–Patients with a slow rate of change tend to have worse outcomes and utilize more services.
–Result: more treatment appears to be associated with worse outcomes.

58 Time in treatment and outcome (graph; x-axis: total time in treatment episode)

59 Total sessions and outcome (graph; x-axis: total sessions in treatment episode)

60 Intensity of services
–Measured by frequency of sessions.
–Goldilocks effect prediction: patients with a slow rate of change will tend to receive a higher frequency of sessions.
–The following slide confirms the prediction.

61 Trajectory of change for patients with severe symptoms (graph; high intensity: ≥ 1 session per week)

62 Goldilocks and Utilization Management
–The Goldilocks effect means the clinician/patient dyad tends to arrive at an appropriate length of treatment.
–The PBH ALERT system seeks a rational allocation of resources by encouraging utilization by those patients most likely to benefit.
–PBH implemented utilization on demand: more sessions are authorized each time an outcome questionnaire is submitted.
–No change in the overall average length of treatment.

63 Reporting outcomes
–Case-by-case reporting to the clinician is helpful to prevent premature termination.
–Residualized change scores are used to control for differences in case mix. Residual score = predicted last score – actual last score.
–Called the “Benchmark Score” (ABHA) or “Change Index Score” (PBH, RFL).
–A positive score means greater-than-average improvement.

64 Evaluating outcomes
–The mean residual change score is used to rank clinicians or clinics/group practices based on outcomes.
–“Severity-adjusted change” is calculated by adding a provider’s mean residual score to the average change for all cases in the database.
–Larger sample sizes yield better estimates of outcome. Use of confidence intervals avoids over-interpretation of results from small sample sizes.
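
A sketch of the provider-level summary (hypothetical numbers; the 90% band mirrors the sample graph that follows):

```python
import numpy as np
from math import sqrt

def severity_adjusted_change(residuals, overall_mean_change, z=1.645):
    """Provider summary: mean residualized change plus the database-wide
    mean change, with a 90% confidence interval (z = 1.645). Small
    caseloads yield wide intervals, discouraging over-interpretation."""
    residuals = np.asarray(residuals, dtype=float)
    adj = overall_mean_change + residuals.mean()
    half_width = z * residuals.std(ddof=1) / sqrt(residuals.size)
    return adj, (adj - half_width, adj + half_width)

# A provider with 12 cases, against a database-wide mean change of 14.0
resid = [5, -2, 8, 3, 0, 6, -4, 7, 2, 1, 9, -1]
print(severity_adjusted_change(resid, overall_mean_change=14.0))
```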

65 Sample: Comparing results (graph with 90% confidence band)

66 Sample Disaggregated Results

67 Therapist effects
–Bruce Wampold, Michael Lambert and others argue that researchers have ignored the individual therapist as a source of variance.
–Therapists vary widely in “effectiveness” – not explained by therapy method, training, or years of experience.
–Even in controlled studies, therapist effects account for more variance in outcomes than treatment method.

68 Therapist effects - continued
–Recent research provides strong evidence that therapist/psychiatrist effects have a significant impact on the effectiveness of medications, particularly antidepressants.
–Evidence suggests that use of medications may increase, rather than decrease, the variance due to the therapist… Huh?

69 The (almost) Bell Curve (graph: solo clinicians with sample sizes ≥ 20, PBH data)

70 Honors for Outcomes
–Selection criteria:
–Minimum of 10 cases with two Y/LSQ data points in the past 3 years.
–Average patient change must be reliably above average: 65% confidence that the provider’s Change Index > 0.
–The Change Index is a case-mix-adjusted measure, comparing outcomes to PBH’s large normative database.
–Honors for Outcomes is updated quarterly.
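
A sketch of the “reliably above average” screen under a normal approximation (this is a reconstruction of the criterion as stated on the slide, not PBH’s published procedure; the scores are invented):

```python
import numpy as np
from math import erf, sqrt

def reliably_above_zero(change_index, confidence=0.65):
    """Estimate the one-sided probability that a provider's true mean
    Change Index exceeds zero (normal approximation), and compare it
    to the required confidence level (65% in the slide)."""
    x = np.asarray(change_index, dtype=float)
    se = x.std(ddof=1) / sqrt(x.size)
    p_above = 0.5 * (1 + erf(x.mean() / se / sqrt(2)))  # Phi(mean / SE)
    return p_above >= confidence, round(p_above, 3)

# Ten case-mix-adjusted change scores for one provider (illustrative)
print(reliably_above_zero([4, -1, 6, 2, 0, 5, 3, -2, 7, 1]))
```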

71 Website

72 Honors for Outcomes - Search

73 Honors for Outcomes - Results

74 Study Question 1
–Honors for Outcomes depends on the predictive validity of the Honors rating: prior performance predicts future performance.
–Question: does a therapist’s outcomes with adults predict outcomes with children and adolescents?
–Implication if yes: therapists’ effectiveness is likely to be global in nature rather than specific to age and/or diagnostic group.

75 Study Question 2
–Question: does a therapist’s outcomes with adults predict outcomes with children and adolescents on medications?
–Implication if yes: the effectiveness of the therapist is apparently mediating the effect of the medication(s).

76 Study Method
–Use the Honors for Outcomes methodology to rank clinicians based on their outcomes with adult patients only.
–Therapists were included in the study if they treated at least one child/adolescent with psychotherapy only and one with psychotherapy plus medication (929 Honors, 1352 non-Honors).
–Compare outcomes for children and adolescents treated by Honors clinicians to those of other clinicians.

77 Result: Outcomes for adults predict outcomes for children

78 Results after adjusting for intake score, age, sex, diagnosis and prior treatment history.

79 All diagnoses and medications

80 Children diagnosed with depression and treated with psychotherapy alone or in combination with an antidepressant

81 Depression & antidepressants

82 Clinician effects and feedback
–PBH ALERT system letters identify patients at risk for premature termination.
–The impact of ALERT letters appears to depend on the effectiveness of the clinician.
–The following graph compares outcomes for at-risk cases treated by clinicians with outcomes in the top quartile versus the bottom quartile.

83 Therapist rank and impact of ALERT letters

84 Outcomes and cost

85 Value Index Value Index = average effect size per $1,000 of expenditure = (effect size / cost of care) × $1,000
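
As a one-function sketch with invented numbers:

```python
def value_index(effect_size: float, cost_of_care: float) -> float:
    """Value Index: average effect size delivered per $1,000 of
    expenditure, i.e. (effect size / cost of care) * 1000."""
    return effect_size / cost_of_care * 1000

# A provider averaging an effect size of 0.8 at $1,600 per episode
print(value_index(0.8, 1600))   # 0.5 effect-size units per $1,000
```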

86 Case history # 1
–Resources for Living (RFL) began using the 4-item Outcome Rating Scale and Session Rating Scale in 2002.
–Telephonic counseling.
–Baseline data collected for 5 months and used to create trajectory-of-change graphs.
–Real-time feedback provided to counselors via the SIGNAL system.

87 RFL Signal System

88 RFL Signal System: results (graph; baseline period vs. training and feedback)

89 Case history # 2
–Accountable Behavioral Health Care Alliance (ABHA) switched from the OQ-30 used by PBH to a version of the 4-item ORS used by RFL and others.
–The questionnaire was modified to become the Oregon Change Index (OCI) and has been used consistently from 2004 to present.
–Administered at every session in outpatient and day treatment settings.
–OCIs are collected at over 80% of all sessions.

90 OCI Feedback
–After collecting baseline data throughout 2004 and early 2005, in mid-2005 ABHA initiated regular weekly feedback at the clinician and supervisor level.
–An Excel-based Active Case Report contains data on all cases seen within the last 6 weeks.
–The report is updated and emailed to clinicians at the start of each week.

91 OCI Active Case Report

92 Trajectory of Change Graph

93 Outcomes trending upwards

94 Implications for clinicians
–Good news: the clinician matters!
–All treatments (including medications!?) are only as effective as the clinicians delivering the treatment.
–Clinicians have an ethical responsibility to assess and improve their personal effectiveness as clinicians… they cannot rely on the treatments alone to be curative.

95 Implications for administrators & policy makers
–Exclusive focus on the effectiveness of treatments rather than clinicians limits the potential to improve outcomes.
–Administrators and policy makers have an obligation to consumers to assure that they have access to effective clinicians.
–Failure to monitor outcomes at the clinician level places consumers at risk.

96 Performance Management: Four Stages of Development
1. Preparation
2. Implementation
3. Performance feedback
4. Managing outcomes

97 Stage one: Preparation
–Goal: put things in motion; avoid fatal errors (see Formulas for Failure).
–Identification of stakeholders and change agents.
–Articulation of vision, mission and purpose. Why are we doing this?
–Choice of measures.
–Development of the case mix model.
–Prototyping of reports and decision support tools.
–Training materials and education of providers.

98 Stage two: Implementation
–Goal: get something up and running.
–Pilot the system with a subset of willing high-volume providers and clinics.
–Refine reports and decision support tools based on feedback from users.
–Monitor and provide feedback on data quality and compliance with data collection protocols.
–Validate and refine the case mix adjustment model.

99 Stage 3: Performance feedback
–Goal: get clinicians used to receiving performance feedback.
–Provide performance feedback on a continuous basis.
–Make direct comparisons across sites or providers; identify top performers.
–Institute remedial measures as necessary to improve data quality.
–Disseminate results; respond to concerns re data quality, validity of methods, etc.

100 Stage 4: Managing outcomes
–Goal: measurably improve outcomes!
–Continued data analysis to explore opportunities for quality improvement.
–Provide information on pathways to improve outcomes.
–Provide additional support in the form of consultation, data analysis, reporting and decision tools as needed.
–Reward top performers with recognition, incentives, increased referrals, etc.

101 Strategies for success
–Put the patient first: patient welfare trumps clinician comfort.
–Show the business case: return on investment; rational allocation of resources; marketing and sales advantages.
–Create a clear mandate to measure outcomes and a date for implementation: a “drop dead date”.
–Keep it simple; don’t be afraid to fix it later.
–Give recognition and support to early adopters and risk takers.

102 Formulas for failure
–Too complicated: too many measures, too much time, too hard to explain.
–IT paralysis: too much technology, too much complexity, too much dependence on expertise not under your control (outside vendors, IT staff).
–Design by committee: too many cooks in the kitchen; too many people with too many agendas.
–Clinician referendum: expectation that the outcomes initiative is dependent upon clinician “acceptance”.

103 http://www.clinical-informatics.com jebbrown@clinical-informatics.com 1821 Meadowmoor Rd. Salt Lake City, UT 84117 Voice 801-541-9720

104 Suggested readings
Ahn H, Wampold BE. Where oh where are the specific ingredients? A meta-analysis of component studies in counseling and psychotherapy. Journal of Counseling Psychology; 2001: 48, 251-257.
Blatt SJ, Sanislow CA, Zuroff DC, Pilkonis PA. Characteristics of effective therapists: Further analyses of data from the National Institute of Mental Health Treatment of Depression Collaborative Research Program. Journal of Consulting and Clinical Psychology; 1996: 64, 1276-1284.
Brown GS, Burlingame GM, Lambert MJ, et al. Pushing the quality envelope: A new outcomes management system. Psychiatric Services; 2001: 52(7), 925-934.
Brown GS, Herman R, Jones ER, Wu J. Improving substance abuse assessments in a managed care environment. Joint Commission Journal on Quality and Safety; 2004: 30(8), 448-454.
Brown GS, Jones ER, Betts W, Wu J. Improving suicide risk assessment in a managed-care environment. Crisis; 2003: 24(2), 49-55.
Brown GS, Jones ER, Lambert MJ, Minami T. Identifying highly effective psychotherapists in a managed care environment. American Journal of Managed Care; 2005: 11(8), 513-520.
Brown GS, Jones ER. Implementation of a feedback system in a managed care environment: What are patients teaching us? Clinical Psychology/In Session; 2005: 61(2), 187-198.
Burlingame GM, Jasper BW, Peterson G, et al. Administration and Scoring Manual for the YLSQ. Wilmington, DE: American Professional Credentialing Services; 2001.
Crits-Christoph P, Mintz J. Implications of therapist effects for the design and analysis of comparative studies of psychotherapies. Journal of Consulting and Clinical Psychology; 1991: 59, 20-26.
Crits-Christoph P, Baranackie K, Kurcias JS, et al. Meta-analysis of therapist effects in psychotherapy outcome studies. Psychotherapy Research; 1991: 1, 81-91.
Elkin I. A major dilemma in psychotherapy outcome research: Disentangling therapists from therapies. Clinical Psychology: Science and Practice; 1999: 6, 10-32.

105 Suggested readings (continued)
Hannan C, Lambert MJ, Harmon C, Nielsen SL, Smart DW, Shimokawa K, Sutton SW. A lab test and algorithms for identifying clients at risk for treatment failure. Journal of Clinical Psychology/In Session; 2005: 61(2), 155-164.
Harmon C, Hawkins EJ, Lambert MJ, Slade K, Whipple JL. Improving outcomes for poorly responding clients: The use of clinical support tools and feedback to clients. Journal of Clinical Psychology/In Session; 2005: 61(2), 175-186.
Huppert JD, Bufka LF, Barlow DH, Gorman JM, Shear MK, Woods SW. Therapists, therapist variables, and cognitive-behavioral therapy outcomes in a multicenter trial for panic disorder. Journal of Consulting and Clinical Psychology; 2001: 69, 747-755.
Kim DM, Wampold BE, Bolt DM. Therapist effects and treatment effects in psychotherapy: Analysis of the National Institute of Mental Health Treatment of Depression Collaborative Research Program. Psychotherapy Research; 2006: 16(2), 161-172.
Lambert MJ, Whipple JL, Smart DW, Vermeersch DA, Nielsen SL, Hawkins EJ. The effects of providing therapists with feedback on patient progress during psychotherapy: Are outcomes enhanced? Psychotherapy Research; 2001: 11, 49-68.
Lambert MJ, Harmon C, Slade K, Whipple JL, Hawkins EJ. Providing feedback to psychotherapists on their patients’ progress: Clinical results and practice suggestions. Journal of Clinical Psychology/In Session; 2005: 61(2), 165-174.
Lambert MJ, Hatfield DR, Vermeersch DA, et al. Administration and Scoring Manual for the LSQ (Life Status Questionnaire). East Setauket, NY: American Professional Credentialing Services; 2001.
Lambert MJ, Whipple JL, Hawkins EJ, et al. Is it time for clinicians to routinely track patient outcome? A meta-analysis. Clinical Psychology: Science & Practice; 2003: 10, 288-301.
Lambert MJ. Emerging methods for providing clinicians with timely feedback on treatment effectiveness. Journal of Clinical Psychology/In Session; 2005: 61(2), 141-144.

106 Suggested readings (continued)
Luborsky L, Crits-Christoph P, McLellan T, et al. Do therapists vary much in their success? Findings from four outcome studies. American Journal of Orthopsychiatry; 1986: 56, 501-512.
Luborsky L, Rosenthal R, Diguer L, et al. The dodo bird verdict is alive and well – mostly. Clinical Psychology: Science & Practice; 2002: 9(1), 2-12.
Matsumoto K, Jones E, Brown GS. Using clinical informatics to improve outcomes: A new approach to managing behavioral healthcare. Journal of Information Technology in Health Care; 2003: 1(2), 135-150.
Okiishi J, Lambert MJ, Nielsen SL, Ogles BM. Waiting for supershrink: An empirical analysis of therapist effects. Clinical Psychology and Psychotherapy; 2003: 10, 361-373.
Porter ME, Teisberg EO. Redefining competition in health care. Harvard Business Review; 2004: 65-76.
Shapiro DA, Shapiro D. Meta-analysis of comparative therapy outcome studies: A replication and refinement. Psychological Bulletin; 1982: 92, 581-604.
Vermeersch DA, Lambert MJ, Burlingame GM. Outcome Questionnaire: Item sensitivity to change. Journal of Personality Assessment; 2000: 74, 242-261.
Wampold BE, Brown GS. Estimating therapist variability: A naturalistic study of outcomes in private practice. Journal of Consulting and Clinical Psychology; 2005: 73(5), 914-923.
Wampold BE, Mondin GW, Moody M, et al. A meta-analysis of outcome studies comparing bona fide psychotherapies: Empirically, “all must have prizes.” Psychological Bulletin; 1997: 122, 203-215.

107 About the presenter G.S. (Jeb) Brown is a licensed psychologist with a Ph.D. from Duke University. He served as the Executive Director of the Center for Family Development from 1982 to 1987. He then joined United Behavioral Systems (a United Health Care subsidiary) as the Executive Director for Utah, a position he held for almost six years. In 1993 he accepted a position as the Corporate Clinical Director for Human Affairs International (HAI), at that time one of the largest managed behavioral healthcare companies in the country. In 1998 he left HAI to found the Center for Clinical Informatics, a consulting firm specializing in helping large organizations implement outcomes management systems. Client organizations include PacifiCare Behavioral Health/United Behavioral Health, the Department of Mental Health for the District of Columbia, Accountable Behavioral Health Care Alliance, Resources for Living, and assorted treatment programs and centers throughout the world. Dr. Brown continues to work as a part-time psychotherapist at a behavioral health clinic in Salt Lake City, Utah. He does measure his outcomes.

