
1 Benchmarking Outcomes Takuya Minami

2 Benchmarking
In many fields, including business, policy, medicine, and sports, "benchmarking" is not a new concept. Quite simply, benchmarking involves:
1. Identifying best-practice criteria or goal(s)
2. Measuring outcomes to see whether they meet the criteria or goal(s)
3. Identifying areas that do not meet the criteria or goal(s)
4. Improving current practices to attain the criteria or goal(s) (and looping back to 1)
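The four-step loop above can be sketched in code. This is purely illustrative; the function and parameter names (`measure`, `improve`, `goal`) are assumptions, not anything from the source:

```python
def benchmark_cycle(measure, improve, goal, max_rounds: int = 10):
    """Sketch of the four-step benchmarking loop (illustrative only).

    measure: callable returning the current outcome score
    improve: callable that adjusts practice, given the shortfall
    goal:    the best-practice criterion to meet
    """
    for round_ in range(max_rounds):
        outcome = measure()            # 2. measure outcomes
        if outcome >= goal:            # compare against the criterion (1/3)
            return round_, outcome     # goal met
        improve(goal - outcome)        # 4. improve practice, loop back to 1
    return max_rounds, measure()
```

The loop terminates either when the criterion is met or after a fixed number of improvement rounds.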

3 Benchmarking Behavioral Health
The use of benchmarking in behavioral health is relatively new. In academia, the first articles appeared in 1998, with refinements in 2002 and 2007; these focus on assessing behavioral health outcomes (i.e., does real-world practice measure up to clinical trials?). In practice, a system was already pioneered at Human Affairs International (HAI; 1995-1998) and then continued at PacifiCare Behavioral Health (PBH; 1998-2006); that system focuses on assessing the behavioral health process (e.g., are any clients "at risk" of poor outcome?).

4 Benchmarking Behavioral Health
A system that tracks client progress must have both the outcome and process foci. The system must be user-friendly enough that clinicians can track their clients' progress. In addition, a criterion for what constitutes a good outcome (i.e., a benchmark to meet) needs to be incorporated. Moreover, the data must be easy to aggregate so that outcomes can be evaluated at the organizational level.

5 Benchmarking Outcomes
Currently, the best benchmark is derived from a meta-analysis of adult depression clinical trials and is approximately d = 0.80. At this effect size, approximately 79% of clients who receive treatment will do better than the average client who did not receive treatment (shown as the shaded area of a normal-curve figure on the original slide).
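The "approximately 79%" figure follows directly from d = 0.80 under normality: it is the standard normal CDF evaluated at d (sometimes called Cohen's U3). A minimal stdlib check:

```python
from math import erf, sqrt

def normal_cdf(x: float) -> float:
    """Standard normal CDF, computed via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

# Proportion of treated clients scoring better than the average
# untreated client, assuming normally distributed outcomes.
d = 0.80
u3 = normal_cdf(d)
print(round(u3 * 100, 1))  # roughly 78.8, i.e. "approximately 79%"
```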

6 Benchmarking Outcomes
The effect size metric (Cohen's d) used as the benchmark is the difference between pretest and posttest divided by the pretest standard deviation (times a correction for sample size). It is important to note that this effect size is different from the well-known effect size of Smith & Glass (1977), which was coincidentally also around d = 0.80: that effect size is the difference between treatment and control groups.
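The pre-post effect size just described can be sketched as follows. The direction of subtraction (pre minus post, so that a drop in distress gives a positive d) and the specific small-sample correction (a common approximation is 1 − 3/(4(n−1) − 1); exact constants vary by source, e.g. Becker, 1988) are assumptions for illustration:

```python
from statistics import mean, stdev

def prepost_effect_size(pre: list[float], post: list[float]) -> float:
    """Pre-post Cohen's d: mean change divided by the pretest SD,
    times a small-sample bias correction (illustrative variant)."""
    n = len(pre)
    # On a distress measure, lower post scores mean improvement,
    # so pre - post yields a positive d for improving clients.
    d = (mean(pre) - mean(post)) / stdev(pre)
    correction = 1.0 - 3.0 / (4.0 * (n - 1) - 1.0)
    return d * correction
```

Note the denominator is the pretest SD only, unlike the treatment-versus-control d of Smith & Glass (1977).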

7 Overall, compared to the clinical trials benchmark of d = 0.80, psychotherapy treatments reimbursed by HMOs are effective. A study using PBH data showed a treatment effect comparable to this benchmark. Absent a benchmark based solely on real-world treatment, this remains the best criterion. Thus, the question at hand is: how do we incorporate this benchmark into a clinical outcomes measurement/management system?

8 Benchmarking Outcomes
Adjustment for initial severity: one caveat of the d = 0.80 benchmark is that clinical trials exclude clients who do not meet criteria for clinically significant distress. In the real world, clients are not denied treatment for lack of initial severity. Higher initial severity is known to produce larger effect sizes (a statistical artifact known as regression to the mean). Naturally, real-world therapists will differ in their clients' average initial severity.
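The regression-to-the-mean artifact can be demonstrated with a toy simulation. All numbers here (the trait/noise model, the 0.3 uniform treatment benefit, the severity cutoff) are illustrative assumptions, not parameters from the source: the point is only that selecting severe intakes inflates pre-post d even when the true treatment effect is identical.

```python
import random
from statistics import mean, stdev

random.seed(0)

def simulate_d(select_severe: bool, n: int = 20000) -> float:
    """Pre-post d under a toy model: observed score = stable trait
    + occasion noise; treatment lowers everyone by the same 0.3."""
    pres, posts = [], []
    for _ in range(n):
        trait = random.gauss(0, 1)
        pre = trait + random.gauss(0, 1)
        if select_severe and pre < 1.0:  # keep only high-severity intakes
            continue
        post = trait + random.gauss(0, 1) - 0.3
        pres.append(pre)
        posts.append(post)
    return (mean(pres) - mean(posts)) / stdev(pres)

# Identical treatment effect, but the severe-only sample shows a much
# larger d: high intake scores partly reflect positive occasion noise,
# which regresses back toward the mean at posttest.
print(simulate_d(select_severe=False), simulate_d(select_severe=True))
```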

9 Benchmarking Outcomes
As it stands, the same magnitude of effect size will mean different things. If Clinician A's caseload is, on average, very severe at intake, then d = 0.80 is rather poor; if the caseload is, on average, very low in severity, then d = 0.80 is exceptionally good. Therefore, effect sizes need to be standardized so that they are directly comparable. Standardizing effect sizes also allows them to be aggregated by organization.

10 Benchmarking Outcomes
Thus, observed (raw) effect sizes are decomposed and then reconstructed so that every therapist is on the same scale. First, the raw effect size is residualized, taking into account the clients' initial severity.
Raw effect size: how large the actual pre-post difference is, without taking anything about the client into account (hard to interpret).
Residualized effect size: how large the pre-post difference is, given the client's initial severity of psychological distress (interpretable, but not straightforward enough for practical use).

11 Benchmarking Outcomes
Severity-adjusted effect size: the residual is reconstructed into an absolute magnitude by standardizing it so that the overall observed mean is preserved. Here, Clinician A's d = 0.80 means the same thing regardless of average intake severity, and Clinician A's d = 0.80 is equivalent to Clinician B's d = 0.80. Therefore, effect sizes can be aggregated at any level (e.g., therapist, organization).
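The decompose-and-reconstruct idea on the last two slides can be sketched as a simple linear residualization. This is a minimal illustration, not the published severity-adjustment algorithm (see Minami et al., 2008, for the actual method): regress raw effect sizes on average intake severity, keep the residuals, then add back the grand mean so the adjusted values sit on the original d scale.

```python
from statistics import mean

def severity_adjusted(raw_d: list[float], intake: list[float]) -> list[float]:
    """Illustrative sketch: residualize per-therapist raw effect sizes
    on average intake severity (simple linear regression), then restore
    the grand mean so adjusted values are on the original d scale."""
    mx, my = mean(intake), mean(raw_d)
    # Ordinary least-squares slope of raw_d on intake severity
    beta = sum((x - mx) * (y - my) for x, y in zip(intake, raw_d)) / \
           sum((x - mx) ** 2 for x in intake)
    residuals = [y - (my + beta * (x - mx)) for x, y in zip(intake, raw_d)]
    # Same overall mean as the raw effect sizes, but severity removed
    return [r + my for r in residuals]
```

After adjustment, two therapists with the same residual performance get the same adjusted d regardless of how severe their caseloads were at intake, which is what makes aggregation across therapists or organizations meaningful.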

12 References
Becker, B. J. (1988). Synthesizing standardized mean-change measures. British Journal of Mathematical and Statistical Psychology, 41, 257-278.
Brown, G. S., Burlingame, G. M., Lambert, M. J., Jones, E., & Vaccaro, J. (2001). Pushing the quality envelope: A new outcomes management system. Psychiatric Services, 52, 925-934.
Brown, G. S., Fraser, J. B., & Bendoraitis, T. M. (1995). Transforming the future: The coming impact of CIS (clinical information systems). Behavioral Health Management, 15, 8-11.
Brown, G. S., & Jones, E. R. (2005). Implementation of a feedback system in a managed care environment: What are patients teaching us? Journal of Clinical Psychology/In Session, 61, 187-198.
Brown, G. S. (J.), Lambert, M. J., Jones, E. R., & Minami, T. (2005). Identifying highly effective psychotherapists in a managed care environment. American Journal of Managed Care, 11, 513-520.
Brown, J., Dries, S., & Nace, D. K. (1999). What really makes a difference in psychotherapy outcome? Why does managed care want to know? In M. A. Hubble, B. L. Duncan, & S. D. Miller (Eds.), The heart and soul of change (pp. 389-406). Washington, DC: American Psychological Association.
Hedges, L. V., & Olkin, I. (1985). Statistical methods for meta-analysis. San Diego, CA: Academic Press.
Minami, T., Serlin, R. C., Wampold, B. E., Kircher, J. C., & Brown, G. S. (J.) (2008). Using clinical trials to benchmark effects produced in clinical practice. Quality & Quantity, 42, 513-525.
Minami, T., & Wampold, B. E. (2008). Adult psychotherapy in the real world. In W. B. Walsh (Ed.), Biennial review of counseling psychology (Vol. 1, pp. 27-45). New York: Routledge.
Minami, T., Wampold, B. E., Serlin, R. C., Hamilton, E. G., Brown, G. S. (J.), & Kircher, J. C. (2008). Benchmarking the effectiveness of psychotherapy treatment for adult depression in a managed care environment: A preliminary study. Journal of Consulting and Clinical Psychology, 76, 116-124.
Minami, T., Wampold, B. E., Serlin, R. C., Kircher, J. C., & Brown, G. S. (J.) (2007). Benchmarks for psychotherapy efficacy in adult major depression. Journal of Consulting and Clinical Psychology, 75, 232-243.
Morris, S. B. (2000). Distribution of the standardized mean change effect sizes for meta-analysis on repeated measures. British Journal of Mathematical and Statistical Psychology, 53, 17-29.
Wade, W. A., Treat, T. A., & Stuart, G. L. (1998). Transporting an empirically supported treatment for panic disorder to a service clinic setting: A benchmarking strategy. Journal of Consulting and Clinical Psychology, 66, 231-239.
Wampold, B. E., & Brown, G. S. (2005). Estimating variability in outcomes attributable to therapists: A naturalistic study of outcomes in managed care. Journal of Consulting and Clinical Psychology, 73, 914-923.
Weersing, V. R., & Weisz, J. R. (2002). Community clinic treatment of depressed youth: Benchmarking usual care against CBT clinical trials. Journal of Consulting and Clinical Psychology, 70, 299-310.

