
Slide 1: Evaluating Health Information Technology: Putting Theory Into Practice
Eric Poon, MD, MPH, Clinical Informatics Research and Development, Partners Information Systems
David F. Lobach, MD, PhD, MS, Division of Clinical Informatics, Department of Community and Family Medicine, Duke University Medical Center, Durham, North Carolina
AHRQ's National Resource Center for Health Information Technology Annual Meeting, June 2005

Slide 2: Outline
- Overview of evaluating HIT: why evaluate?
- General approach to evaluation
- Choosing evaluation measures
- Study design types
- Analytical issues in HIT evaluations
- Evaluation in the 'real world': Duke University Medical Center

Slide 3: Why Measure the Impact of HIT?
- Impact of HIT is often hard to predict: many "slam dunks" go awry
- You can't manage/improve what isn't measured
- Understand how to clear barriers to effective implementation
- Understand what works and what doesn't: invent the wheel only once
- Justify enormous investments: return on investment
- Allow other institutions to make tradeoffs intelligently: use results to win over late adopters

Slide 4: General Approach to Evaluating HIT
1. Understand your intervention
2. Formulate questions to answer
3. Select and define measures
4. Pick the study design
5. Analyze the data

Slide 5: Getting Started: Get to Know Your Intervention
- What problem(s) is it trying to solve?
- Think about intermediate processes
- Identify potential barriers to successful implementation: managerial barriers, end-user behavioral barriers
- Understand how your peers around the country are addressing (or not addressing) the same issues

Slide 6: Formulating Questions
- Likely questions:
  - Does the HIT work?
  - What would have made it work better?
  - What would the next set of designers/implementers like to know?
- Has this question been fully answered before? Don't reinvent the wheel! (not a big concern)
- What impact would the answer have? On peers? On policy makers?

Slide 7: Array of Measures
- Quality and safety: clinical outcomes, clinical processes
- Knowledge: patient, provider
- Satisfaction and attitudes: patient, provider
- Resource utilization: costs and charges, length of stay (LOS), employee time/workflow
- Lessons learned

Slide 8: Choosing Study Measures
- Clinical vs. process measures:
  - Clinical outcomes (e.g., mortality) are desirable
  - Measuring process outcomes (e.g., door-to-antibiotic time) is justifiable if the relationship between outcome and process has already been demonstrated
- Will outcomes be impacted by the intervention?
- Will the impact on outcomes be detectable during the study period? Questionable for rare events (e.g., adverse outcomes) and slow-moving measures (e.g., colon cancer screening)
- What resources do you have? Don't bite off more than you can chew.

Slide 9: Selecting Study Types
- Commonly used study types:
  - Optimal design: randomized controlled trials
  - Factorial design
  - Before-and-after (time series) trials
- Main study design issues:
  - Secular trend: can a simultaneous control group be established?
  - Confounding: can you randomly assign individuals to study groups?
- Study design is often influenced by the implementation plan: respect operational needs, but there is often room for creative designs

Slide 10: Randomization Nuts and Bolts
- A control arm (usual care) is justifiable as long as benefit has not already been demonstrated
- Choose a truly random variable: not day of the week
- Consideration: stratified randomization ensures that the intervention and control groups are similar on important characteristics (e.g., baseline computer literacy)
- Strongest possible intervention
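
A minimal sketch of stratified randomization, assuming Python; the clinic names and literacy strata are invented for illustration:

```python
# Sketch: stratified randomization of clinics into intervention vs. control,
# balanced within strata of a baseline characteristic (here, a hypothetical
# "computer literacy" level). Data are illustrative, not from the talk.
import random
from collections import defaultdict

def stratified_randomize(units, stratum_of, seed=42):
    """Assign units to 'intervention'/'control', balancing within each stratum."""
    rng = random.Random(seed)
    strata = defaultdict(list)
    for u in units:
        strata[stratum_of[u]].append(u)
    assignment = {}
    for members in strata.values():
        rng.shuffle(members)
        half = len(members) // 2  # odd-sized strata put the extra unit in control
        for u in members[:half]:
            assignment[u] = "intervention"
        for u in members[half:]:
            assignment[u] = "control"
    return assignment

clinics = ["A", "B", "C", "D", "E", "F"]
literacy = {"A": "high", "B": "high", "C": "low", "D": "low", "E": "mid", "F": "mid"}
print(stratified_randomize(clinics, literacy))
```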

Slide 11: Randomization Unit: How to Decide?
- Small units (patients) vs. large units (practices, wards)
- Contamination across randomization units: if the risk of contamination is significant, consider larger units
- Contamination dilutes the measured effect and can lead to underestimating impact; however, if you still see a difference, the impact is real
- Randomization by patient is generally undesirable: contamination, ethical concerns

Slide 12: Randomization Schemes: Simple RCT
- Burn-in period:
  - Give the target population time to get used to the new intervention
  - Data from this period are not used in the final analysis
[Timeline diagram: XX clinics collect baseline data during a baseline period; the intervention is then deployed to the intervention arm with a 3-month burn-in period; RCT data collection runs during the intervention period while the control arm receives no intervention; the control arm gets the intervention in the post-intervention period.]
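
To make the burn-in concrete, a small sketch assuming Python/pandas; the column names (obs_date, deploy_date) and dates are hypothetical. It simply drops observations from the 3-month window after deployment before analysis:

```python
# Sketch: exclude burn-in observations from the analysis dataset.
import pandas as pd

df = pd.DataFrame({
    "clinic": ["A", "A", "B"],
    "obs_date": pd.to_datetime(["2005-01-15", "2005-05-01", "2005-05-10"]),
    "deploy_date": pd.to_datetime(["2005-01-01", "2005-01-01", "2005-01-01"]),
    "outcome": [0, 1, 1],
})

BURN_IN = pd.Timedelta(days=90)  # 3-month burn-in, per the slide
# Keep only observations made after the burn-in window has elapsed.
analysis_df = df[df["obs_date"] >= df["deploy_date"] + BURN_IN]
print(analysis_df)
```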

Slide 13: Randomization Schemes: Factorial Design
- May be used to evaluate more than one intervention concurrently: assess interventions independently and in combination
- Loss of statistical power
- Usually not practical for more than 2 interventions
[2x2 grid: control (no interventions), A only, B only, A+B.]
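
A sketch of 2x2 factorial assignment, randomizing each unit to interventions A and B independently; the practice names and 50/50 split are illustrative assumptions:

```python
# Sketch: independent randomization to two interventions yields the four
# cells of the 2x2 factorial design (control, A, B, A+B).
import random

rng = random.Random(0)
units = [f"practice_{i}" for i in range(8)]
assignment = {u: {"A": rng.random() < 0.5, "B": rng.random() < 0.5} for u in units}

for u, arms in assignment.items():
    cell = ("A+B" if arms["A"] and arms["B"]
            else "A" if arms["A"]
            else "B" if arms["B"]
            else "control")
    print(u, cell)
```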

Slide 14: Randomization Schemes: Staggered Deployment
- Advantages of staggering: easier user education and training; IT problems can be fixed up front
- Need to account for secular trend and baseline differences: include a time variable in the regression analysis and control for practice characteristics
[Diagram: in each successive period, another group crosses over from control to intervention.]
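
A sketch of what a staggered (stepped-wedge style) schedule looks like; the group and period counts are invented. Each row is a group, 0 = control, 1 = intervention:

```python
# Sketch: staggered deployment schedule. Each group crosses from control
# to intervention one period later than the previous group, so every
# period has concurrent intervention and control data.
n_groups, n_periods = 4, 5

schedule = [
    [1 if period >= group + 1 else 0 for period in range(n_periods)]
    for group in range(n_groups)
]
for g, row in enumerate(schedule):
    print(f"group {g}:", row)
# group 0: [0, 1, 1, 1, 1]
# group 1: [0, 0, 1, 1, 1]  ... and so on
```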

Slide 15: Inherent Limitations of RCTs in Informatics
- Blinding is seldom possible
- Effect on documentation vs. clinical action
- People always question generalizability: success is highly implementation-dependent
- Efficacy-effectiveness gap: the 'invented here' effect

Slide 16: Mitigating the Limitations of Before-and-After Study Designs
- Before-and-after trials are common in informatics because concurrent randomization is hard
- Don't lose the opportunity to collect baseline data!
- Keep the time gap between the before and after periods relatively short
- Look for a secular trend in the statistical analysis and adjust for it if present (see the sketch below)
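
One common way to look for and adjust for a secular trend is segmented (interrupted time series) regression. A sketch using statsmodels with simulated monthly rates; the data and variable names are illustrative:

```python
# Sketch: before-and-after analysis with adjustment for a secular trend.
# 'month' captures the underlying trend; 'post' is the level change at
# go-live. Simulated data: true trend +0.5/month, true effect +8.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
months = np.arange(24)                        # 12 months before, 12 after
post = (months >= 12).astype(int)             # 1 after go-live
rate = 60 + 0.5 * months + 8 * post + rng.normal(0, 2, 24)
df = pd.DataFrame({"month": months, "post": post, "rate": rate})

model = smf.ols("rate ~ month + post", data=df).fit()
print(model.params)   # the 'post' coefficient is the trend-adjusted effect
```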

Slide 17: Common Pitfalls with Data Collection
- Measures you define and collect on your own: pilot the data collection and refine definitions early; ask yourself early whether the data you collect measure what you intended to measure
- Measures others defined but you collect on your own: do you need to adapt other people's instruments?
- Measures others define and collect for you: understand the nuances and limitations, particularly with administrative data

Slide 18: Electronic Data Abstraction: There's No Free Lunch!
- Convenient and time-saving, but...
- Plan on some selected chart review to get information not available electronically
- Get ready for surprises: the documentation effect of EMRs

Slide 19: Data Collection Issue: Baseline Differences
- Randomization schemes often leave an imbalance between the intervention and control arms
- Collect baseline data and adjust for baseline differences
- In the regression analysis, the interaction term (Time * Allocation Arm) gives the effect of the intervention
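
A sketch of the interaction-term adjustment the slide describes, using statsmodels; the data, effect sizes, and column names are invented. The arm:time coefficient estimates the intervention effect net of baseline differences between arms and of any change shared by both arms over time:

```python
# Sketch: Time * Arm interaction (difference-in-differences style).
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(1)
arm  = np.tile([0, 0, 1, 1], 25)           # 0 = control, 1 = intervention
time = np.tile([0, 1, 0, 1], 25)           # 0 = baseline, 1 = follow-up
# Intervention arm starts lower (baseline imbalance) but improves more.
score = 70 - 5 * arm + 2 * time + 8 * arm * time + rng.normal(0, 3, 100)
df = pd.DataFrame({"arm": arm, "time": time, "score": score})

model = smf.ols("score ~ arm + time + arm:time", data=df).fit()
print(model.params["arm:time"])   # should recover roughly +8
```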

Slide 20: Data Collection Issue: Completeness of Follow-up
- The higher the better: over 90%; 80-90%; less than 80%
- Intention-to-treat analysis: in an RCT, outcomes should be analyzed according to the original randomization assignment
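
A minimal illustration of intention-to-treat grouping, assuming Python/pandas; the data and the used_system column are hypothetical. Outcomes are summarized by the arm each subject was randomized to, not by actual exposure:

```python
# Sketch: intention-to-treat analysis groups by assignment, even when
# some subjects cross over (intervention users who never used the system,
# control subjects who gained access).
import pandas as pd

df = pd.DataFrame({
    "assigned_arm": ["intervention", "intervention", "control", "control"],
    "used_system":  [True, False, False, True],   # crossovers happen
    "outcome":      [1, 0, 0, 1],
})

print(df.groupby("assigned_arm")["outcome"].mean())
```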

Slide 21: A Common Analytical Issue: The Clustering Effect
- Occurs when your observations are not independent (example: each physician treats multiple patients)
- May need to increase the sample size to account for the loss of power
[Diagram: physicians in the intervention and control groups each treat multiple patients, whose outcomes are assessed.]
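
A standard way to quantify the needed increase is the design effect, DEFF = 1 + (m - 1) * ICC, where m is the average cluster size and ICC is the intraclass correlation. A small sketch; the cluster size and ICC values are illustrative assumptions:

```python
# Sketch: inflate the sample size for clustering using the design effect.
import math

def clustered_sample_size(n_independent: int, cluster_size: float, icc: float) -> int:
    """Sample size needed once observations are clustered within providers."""
    deff = 1 + (cluster_size - 1) * icc
    return math.ceil(n_independent * deff)

# e.g. 400 patients needed if independent, ~20 patients per physician,
# ICC of 0.05 -> the requirement nearly doubles.
print(clustered_sample_size(400, 20, 0.05))  # 780
```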

Slide 22: Looking at Usage Data
- A great way to tell how well the intervention is going; target your troubleshooting efforts
- In terms of evaluating HIT:
  - Correlate usage with implementation/training strategy
  - Correlate usage with stakeholder characteristics
  - Correlate usage with improved outcomes

Slide 23: Studies on Workflow and Usability
- How to make observations?
  - Direct observations
  - Stimulated observations
  - Random paging method (subjects must be motivated and cooperative)
  - Usability lab
- What to look for?
  - Time to accomplish specific tasks: need to pre-classify activities; handheld/tablet PC tools may be very helpful
  - Workflow analysis
  - Asking users to 'think aloud'
  - Unintended consequences of HIT

Slide 24: Cost-Benefit Analysis
- Do the benefits of the technology justify the costs? Monetary benefits minus monetary costs
- Important in the policy realm
- Need to specify the perspective: organizational or societal
- Cost analysis is more straightforward:
  - Prospective data collection is preferred
  - Discounting: a dollar spent today is worth more than a dollar 10 years from now
- Benefits analysis is more controversial:
  - Cost of illness averted: medical costs, productivity for the patient
  - What is the cost of suffering due to preventable adverse events?
  - What is the cost of a life?
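
A sketch of discounting in practice, assuming Python; the 5% rate and the cash flows are invented. Future costs and benefits are deflated to present value before netting:

```python
# Sketch: net present value of an HIT investment with discounting.
def present_value(cash_flows, rate=0.05):
    """Discount year-indexed cash flows (year 0 = today) to present value."""
    return sum(cf / (1 + rate) ** year for year, cf in enumerate(cash_flows))

costs    = [500_000, 100_000, 100_000, 100_000]   # up-front build plus maintenance
benefits = [0, 250_000, 300_000, 300_000]         # benefits lag deployment

net_benefit = present_value(benefits) - present_value(costs)
print(f"Net present value: ${net_benefit:,.0f}")
```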

Slide 25: Using Surveys: Stay Tuned!
- Survey user beliefs, attitudes, and behaviors
- Response rate and responder bias: aim for a response rate above 50-60%
- Keep the survey concise
- Pilot the survey for readability and clarity
- Formal validation is needed if you plan to develop a scale/summary score

Slide 26: Qualitative Methodologies: Don't Touch That Dial!
- Major techniques: direct observations, semi-structured interviews, focus groups
- Adds richness to the evaluation:
  - Explains successes and failures; generates lessons learned
  - Captures the unexpected
  - Great for forming hypotheses
  - People love to hear stories
- Data analysis: the goal is to make sense of your observations; iterative and interactive

Slide 27: Concluding Remarks
- Don't bite off more than you can chew: pick a few study outcomes and study them well
- It's a practical world: balancing operational and research needs is always a challenge
- Life (data collection) is like a box of chocolates: you don't know what you're going to get until you look, so look early!

Slide 28: Thank You
Eric Poon, MD, MPH. Email: epoon@partners.org
Acknowledgements:
- Davis Bu, MD, MA, CITL, Partners Healthcare
- David Bates, MD, MSc, Chief, Division of General Medicine, Brigham and Women's Hospital

