Dashboard: Data Review & Input
Presentation Objectives Learning Objectives: List the 5 key data components that contribute to the success of the Evidence-Based treatment models (FFT, MST (all adaptations), PCIT, TIP, TF-CBT, and CPP). Explain how monitoring the key data components supports the continuous and sustainable delivery of the Evidence-Based treatment models. Identify at least 1 outcome goal and 1 fidelity goal for each of the Evidence-Based treatment models offered in the District.
[Slide graphic: Areas within EBPs (Basic Research, Efficacy Studies, Effectiveness Studies, Dissemination Studies, Process Studies, Training, Supervision, Implementation, Sustainability) surrounded by Data Influences (Family, Friends, Neighborhood, Built Environment, Culture, Economic Conditions, SES, Laws, Immigration)]
Why a Dashboard? A dashboard provides a quick, accurate, and comparative assessment of the project and its components. The goal of the dashboard is to show how the Families First Project is performing at both the macro (project-wide) and micro (model and provider) levels. To do this well, it was important to identify a finite number of data points that could be tracked across the project. – This allows meaningful comparisons to be made and a consistent thought process to be used when examining various sections of the project.
What Does the Dashboard Do? Identifies and tracks key pieces of data Provides a well-rounded picture of how the overall project, a model, or a provider (or provider/model combination) is performing – Decisions can be made on how to improve or strengthen performance based on the results shown on the dashboard.
Who is the Dashboard Designed For? Providers Models Project Managers (EBA) Funders (DBH) All can use the dashboard in total or in part to identify strengths and weaknesses in the services being provided.
When is the Dashboard Updated? The data for each provider and model is to be submitted by the 10th of each month. The data is then reviewed and submitted for entry to the dashboard by the 15th of each month. The dashboard is published to the EBA website by the 25th of each month.
Where Does the Dashboard Information Come From? Each month the providers and models submit “trackers” containing these key pieces of information. Janine Robinson (EBA Program Specialist) then reviews the data for accuracy before sending it on for entry into the dashboard.
Example of a Tracker
How Many Levels are There in the Dashboard? Project as a Whole (Macro Level) Model Provider Provider/Model Combinations (Micro Level)
How Do the Levels of the Dashboard Tie Together? Each provider/model dashboard rolls up into a total provider dashboard Each provider/model dashboard also rolls up into a total model dashboard The model dashboards are rolled up into the cumulative dashboard that assesses the overall performance of the project.
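The roll-up described above can be sketched in code. This is an illustrative sketch only; the data shapes and function name are assumptions, with each (provider, model) cell holding that combination's monthly case count.

```python
from collections import defaultdict

def roll_up(cells: dict) -> tuple[dict, dict, int]:
    """Aggregate provider/model cells into provider totals, model totals,
    and a project-wide (cumulative) total.

    `cells` maps (provider, model) pairs to case counts -- a hypothetical
    shape, not the dashboard's actual storage format.
    """
    by_provider = defaultdict(int)
    by_model = defaultdict(int)
    for (provider, model), cases in cells.items():
        by_provider[provider] += cases  # rolls up into the provider dashboard
        by_model[model] += cases        # rolls up into the model dashboard
    # Model dashboards roll up into the cumulative project dashboard.
    return dict(by_provider), dict(by_model), sum(cells.values())
```

The same pattern applies to any of the tracked measures, not just case counts.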
Dashboard Data Points
Staffing – # of staff that are serving cases vs. # of staff that are expected to serve cases
Capacity – # of cases that the staff in place could be serving vs. the # of cases that could be seen when fully staffed
Utilization – # of cases actually being seen vs. the # of cases that could be seen by the staff in place
Quality – adherence to model systemic guidelines
Outcomes – # of successful discharges (as defined by the models) vs. the total # of discharges
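Each data point is a ratio of an actual figure to a target figure. A minimal sketch of the computation follows; the tracker field names are assumptions, since the actual tracker layout is not shown here.

```python
def dashboard_data_points(tracker: dict) -> dict:
    """Compute the 5 dashboard data points from one tracker submission,
    each expressed as a percentage of its target.

    Field names (staff_active, case_capacity_current, etc.) are
    hypothetical stand-ins for the real tracker columns.
    """
    def pct(actual, target):
        return round(100.0 * actual / target, 1) if target else 0.0

    return {
        # Staff serving cases vs. staff expected to serve cases
        "staffing": pct(tracker["staff_active"], tracker["staff_expected"]),
        # Cases the staff in place could serve vs. cases servable when fully staffed
        "capacity": pct(tracker["case_capacity_current"], tracker["case_capacity_full"]),
        # Cases actually being seen vs. cases the staff in place could serve
        "utilization": pct(tracker["cases_active"], tracker["case_capacity_current"]),
        # Adherence to model systemic guidelines (assumed reported as a percentage)
        "quality": tracker["quality_pct"],
        # Successful discharges (as defined by the models) vs. total discharges
        "outcomes": pct(tracker["discharges_successful"], tracker["discharges_total"]),
    }
```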
Data Point Evaluation Each of the data points is evaluated over the most recent 3 months, so that a single month’s performance does not overly skew the evaluation. Stoplight grades are given for each category (ranges, as percentages of a perfect 100%, noted for each color):
Black = Optimal (>94%)
Green = Good (85%-94%)
Yellow = Satisfactory (75%-84%)
Red = Poor (<75%)
Data Point Evaluation cont’d These grades are updated each month and give a quick snapshot look into how the model or provider has done in these 3 months.
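The stoplight grading above can be sketched as a simple threshold function. The thresholds come from the slide; the function and label names are illustrative.

```python
def stoplight_grade(pct: float) -> str:
    """Map a trailing-3-month percentage (0-100) to its stoplight grade."""
    if pct > 94:
        return "Black"   # Optimal (>94%)
    if pct >= 85:
        return "Green"   # Good (85%-94%)
    if pct >= 75:
        return "Yellow"  # Satisfactory (75%-84%)
    return "Red"         # Poor (<75%)
```

Note that 94% itself grades Green, since Black requires strictly more than 94%.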
Data Point Comparative In addition to the stoplight evaluation, there is also a comparative bar graph that shows the performance in each of the 5 data points in 3-month groupings (most recent 3 months and prior 3 months). This bar graph is done on a scale of 0-4 and translates to the color coding of the stoplight evaluations (for the most recent 3 months only) as follows:
Black = Optimal (>3.76)
Green = Good (3.40-3.76)
Yellow = Satisfactory (3.00-3.39)
Red = Poor (<3.00)
Data Point Comparative cont’d This is a way to see if there has been any change in performance in the most recent 3 months from the previous 3 months.
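The stated cut points on the 0-4 bar-graph scale (3.76 and 3.00) match the stoplight percentages (94% and 75%) scaled by 4/100, so the comparative scale is assumed here to be a linear rescaling of the percentage. A sketch under that assumption:

```python
def to_comparative_scale(pct: float) -> float:
    """Convert a 0-100% data-point score to the 0-4 bar-graph scale.

    Assumes a linear mapping (100% -> 4.0), which reproduces the
    slide's cut points: 94% -> 3.76 and 75% -> 3.00.
    """
    return round(pct * 4 / 100, 2)
```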
Data Point Historical Tracking For each of the 5 data points, there are also line graphs that show the last 6 months of activity (as well as an average for the 6 months). This is helpful to identify whether there are any trends or single month anomalies.
Raw Data For each of the 5 data points, the data behind the graphs is displayed for the last 6 months of activity.
Case Breakdown The total number of active cases that are open at the end of each month is broken down to show which cases are new in the month (unduplicated) and which cases have been carried over from prior months (duplicated).
Links to Other Dashboards To move from one dashboard to another, there are links in the upper right-hand corner of each dashboard that will bring the user directly to any of the other dashboards.
Contact Information
Data Submission Questions – Janine Robinson, EBA Program Specialist
Dashboard Functionality & Interpretation – Bob Sayles, Director of Information Management & Technology
Dashboard Interpretation & Implementation – Leslie-Ann Byam, Program Director