
1 Claims Editing and Pre-pay Fraud and Abuse Detection and Avoidance
Tom McGraw

2 Agenda
- Claims Editing and Pre-pay Fraud and Abuse Detection and Avoidance Defined
- Detection Methods
- Comparison of Pre-pay and Post-pay Approaches
- Results
- Questions

3 Claims Editing and Pre-pay Fraud and Abuse Detection and Avoidance Defined

4 Use of the Terms "Avoidance" and "Fraud" in This Presentation
- "Avoidance" means the actual savings from claims directly denied or reduced
  - Net of amounts paid on the original claim
  - Net of amounts paid on re-filed claims
- "Avoidance" does not include additional savings from a change in the provider's behavior, or from changes in the behavior of other providers
- "Fraud" is used as shorthand for any claim filed improperly by the provider or the provider's billing agent, and really means "fraud, waste, abuse, or other improperly filed claims"
- Except where specified, I am not referring to the legal definition of "fraud," which includes "intent"

5 Claims Editing
- Several claims editors are on the market; Ingenix provides the Ingenix Claims Editing System (iCES)
- Claims editors focus on claims that are inherently incorrect, or that are incorrect given other claims
- They are based on industry standards, coding requirements, or payer-specific requirements
- Generally they auto-deny claims pre-payment
- Some claims editors also have "rules" that are not auto-deny

6 Pre-payment Processes That I Won't Be Talking About
- Prior authorization of services
- Biometric or other technology that validates that the proper patient and provider were present at the point of service
- 100% pre-pay review of claims over a certain dollar threshold

7 What Is Pre-pay Fraud and Abuse Detection and Avoidance?
- Identification of "suspect" claims or claim lines
  - One or more models, rules, or pre-payment flags have identified the claim as likely to be incorrectly coded, not performed, or not performed as coded
- Stopping those claims for human review
  - Almost always requires stopping (suspending or "pending") a claim and requesting a medical record
  - In some claims processing systems/approaches, claims are denied when they are stopped and the medical record is requested
- Performing an in-depth investigation on the claim

8 Processing—Standard Implementation

9 Detection Methods

10 The Challenge of Improper Claim Detection
- This space represents the universe of claims
- Manual clinical review is impossible for the entire space
- Goal: stop as many reds (improper claims) for review as possible while keeping the number of blues (proper claims) identified to a minimum

11 Predictive Models & Analytical Targets
Identification of "unlikely" claims, using multiple methods:
- Dimensional modeling anomalies flag a claim as high risk for "overpayment"
  - Codes that can be used to bypass conventional claims edits
  - Provider's historical prevalence of up-coding
  - Likelihood that certain claims should have been grouped
  - Scores based on multiple factors
- Unlikely or infrequent relationships
  - Between diagnoses and procedures within a claim
  - Between procedures from different claims for the same patient
- Peer comparison approach
  - Outlier within specialty/region for performing high-cost procedures, based on synthetic (data-driven) specialty groupings
  - Outlier for ordering certain tests or treatments
- Changes in provider behavior, particularly increases in claims filed, such as hours of work
These are some examples of the issues identified in pre-pay analytics.
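The peer comparison approach above can be sketched as a simple outlier test against a peer group's mean. This is an illustrative assumption, not the actual Ingenix method: the provider IDs, procedure rates, and the cutoff value are all invented for the example.

```python
from statistics import mean, stdev

def peer_outliers(rates_by_provider, z_cutoff=1.5):
    """Flag providers whose rate of a high-cost procedure deviates sharply
    from their peer group's mean.

    rates_by_provider: dict mapping provider ID -> procedure rate; all
    providers are assumed to belong to the same data-driven peer group.
    The cutoff is chosen for this tiny illustration; a real system would
    calibrate it on much larger groups.
    """
    rates = list(rates_by_provider.values())
    mu, sigma = mean(rates), stdev(rates)
    if sigma == 0:
        return []
    return [pid for pid, r in rates_by_provider.items()
            if (r - mu) / sigma > z_cutoff]

# Example: one provider bills the procedure far more often than its peers.
group = {"P001": 0.02, "P002": 0.03, "P003": 0.025,
         "P004": 0.02, "P005": 0.30}
print(peer_outliers(group))  # only P005 is flagged
```

Note that only the high side is flagged here; a provider who bills a procedure unusually rarely is not a payment-integrity risk in this context.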

12 Pre-Pay Fraud and Abuse Detection Methods (from observable patterns to inference)
- Provider Flags: a list of known providers with issues is compiled, and all or a subset of their claims are stopped for review
- Aberrant Billing Pattern (ABP) Algorithms: clinical expertise crystallized into coding logic; patterns are identified at the claim level
- Challenger Analytics: outlier analysis and soft rules create dynamic provider flagging
- Predictive Model: detects more advanced improper billing patterns using interactions among many variables

13 Claim Scoring
- The output of traditional rules or flags is binary: either a claim is flagged or it is allowed
- With the Predictive Model, the output is a continuous score
- Scores range from 1 to 1,000, with higher scores indicating larger deviation from typical behavior
- Once each claim line is scored, the final score for the claim is the maximum of the line scores
- The purpose of the score is to rank-order claims in descending order of suspicion of fraud and abuse
- A score threshold is set so that only claims whose Predictive Model score exceeds the threshold are stopped, ensuring the most anomalous claims are stopped for review
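The scoring logic described on this slide, claim score as the maximum of its line scores, then rank-ordering and thresholding, can be sketched directly. The claim IDs, line scores, and threshold value are invented for the example.

```python
def claim_score(line_scores):
    """A claim's final score is the maximum of its line scores (1..1,000)."""
    return max(line_scores)

def stop_for_review(claims, threshold):
    """Return claim IDs whose score exceeds the threshold, rank-ordered by
    descending suspicion (i.e., descending score).

    claims: dict of claim ID -> list of per-line Predictive Model scores.
    """
    scored = {cid: claim_score(lines) for cid, lines in claims.items()}
    flagged = [cid for cid, s in scored.items() if s > threshold]
    return sorted(flagged, key=lambda cid: scored[cid], reverse=True)

claims = {"C1": [120, 340], "C2": [50, 980, 200], "C3": [700]}
print(stop_for_review(claims, threshold=650))  # ['C2', 'C3']
```

Here C2 is stopped because a single anomalous line (980) drives the whole claim's score, which is exactly the point of taking the maximum rather than an average.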

14 Predictive Model
- Multiple anomaly factors are used to identify suspect claims
  - Uses a weighted approach and a deviation-from-expected-mean approach
- Continually updated by payer experience
  - Most core variables/equations are unchanged
  - Unbundling differs based on Medicare rules
- "Peer" grouping is critical
  - Does not use the declared specialty
  - Data-driven peer groups are determined through advanced analytical techniques and novel use of data
  - Start by looking for approximately 300 peer groups
  - Work down to 100 to 200 groups to ensure each has sufficient size
- Has been and remains a core component of P2.0
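The "work down from ~300 peer groups to 100 to 200" step can be illustrated as merging undersized groups until each has sufficient size. This is a sketch under a stated assumption: the deck does not describe the actual consolidation logic, and the assumption here is that the input groups are ordered so adjacent groups are most similar.

```python
def consolidate_peer_groups(groups, min_size):
    """Merge undersized peer groups into an adjacent group until every
    group has at least min_size members.

    groups: list of lists of provider IDs, assumed ordered so that
    adjacent groups are the most similar (the real similarity metric is
    not described in the deck).
    """
    groups = [list(g) for g in groups]
    while True:
        idx = next((k for k, g in enumerate(groups) if len(g) < min_size),
                   None)
        if idx is None or len(groups) == 1:
            break
        # Merge into the previous group when possible, else the next one.
        j = idx - 1 if idx > 0 else idx + 1
        groups[j].extend(groups[idx])
        del groups[idx]
    return groups

print(consolidate_peer_groups([["A", "B"], ["C", "D", "E"], ["F"]],
                              min_size=3))
```

Merging only ever grows groups, so the loop terminates once no group is below the minimum (or only one group remains).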

15 Determining Which Claims to Flag
- The threshold determines which claims scored by the Predictive Model ultimately get flagged for review
- The threshold is composed of the following parameters:
  - Predictive Model score
  - Claim charged amount
- An analysis of sample data run through the Predictive Model is performed to determine the initial threshold setting
  - Striking a balance between maximizing potential savings and minimizing false positives
- This analysis is presented to the client for review and approval
- Ingenix reviews and recommends threshold changes to clients on a regular basis
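A minimal sketch of a two-parameter threshold follows. The deck says the threshold is composed of the Predictive Model score and the claim charged amount; requiring both to be exceeded, and the specific cutoff values, are assumptions of this sketch, not the product's actual rule.

```python
def should_flag(model_score, charged_amount,
                score_threshold=700, amount_threshold=500.00):
    """Flag a claim for review only when both threshold parameters are met.

    Pending a claim delays payment, so low-dollar claims are let through
    even at high scores: the potential savings would not justify the
    review cost. Cutoff values here are invented for illustration.
    """
    return model_score > score_threshold and charged_amount > amount_threshold

print(should_flag(850, 1200.00))  # True: high score, high dollars at risk
print(should_flag(850, 40.00))    # False: low-dollar claim not worth pending
print(should_flag(300, 1200.00))  # False: score below threshold
```

Tuning the two cutoffs is exactly the savings-versus-false-positives balance the slide describes: lowering either one stops more improper claims but also pends more correct ones.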

16 Provider Filters
- Provider filters can be set up to ensure that no claims for a given provider are flagged by P2.0
- Provider filters can be based on either TIN or NPI, and are created to avoid flagging claims for providers that over time have proven to have a high false-positive rate
- These filters are time-limited to ensure the providers are reviewed on a regular basis
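A time-limited provider filter can be sketched as follows. The field names, the 180-day default, and the example NPI are assumptions for illustration; the deck only states that filters key on TIN or NPI and expire so providers are re-reviewed.

```python
from datetime import date, timedelta

class ProviderFilter:
    """Time-limited exclusion: claims from this provider are not flagged
    while the filter is active. When it expires, the provider's
    false-positive history is reviewed again before the filter is renewed.
    """

    def __init__(self, identifier, id_type, start, days=180):
        if id_type not in ("TIN", "NPI"):
            raise ValueError("filter must key on TIN or NPI")
        self.identifier = identifier
        self.id_type = id_type
        self.expires = start + timedelta(days=days)

    def suppresses(self, claim_provider_id, on_date):
        """True if a flag on this provider's claim should be suppressed."""
        return claim_provider_id == self.identifier and on_date < self.expires

f = ProviderFilter("1234567890", "NPI", start=date(2011, 1, 1))
print(f.suppresses("1234567890", date(2011, 3, 1)))  # True: filter active
print(f.suppresses("1234567890", date(2011, 8, 1)))  # False: filter expired
```

The expiry check is what keeps the filter from becoming a permanent blind spot for a provider whose billing behavior later changes.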

17 Detection Methods: Processing and Feedback
- Feedback improves the models earlier in the processing sequence, which become more accurate over time for each client
- Over time, improvements to the earlier models reduce the measured "accuracy" of the later models, since the easier cases are caught upstream
- Feedback for model improvement is also received from reviewers of claims and medical records

18 Staffing

19 Staffing
- Model development and maintenance: over 70 staff
  - Advanced statistical modelers
  - Data analysts
  - Software developers
  - Clinicians
  - Coding and billing experts
  - Payment policy experts
- Claims and medical record review: hundreds of staff
  - Clinicians
  - Coders
  - Investigators

20 Comparison of Pre-pay and Post-pay Approaches

21 Comparison of Approaches
Pre-pay:
- Providers complain less when their money is not paid than when their money is taken back
- Claim-specific review
- Fast turnaround is needed because payment of some correct claims is being held up
- Claim-by-claim review limits referral to law enforcement for suspected criminal fraud
- Opportunity to stop payments to providers that would never pay back improperly paid amounts
Post-pay:
- Claim-specific or provider reviews
- Provider review can lead to increased recoveries through extrapolation
- More referrals to law enforcement for suspected criminal fraud when providers are reviewed
- Needed for identifying certain activities, such as network fraud and improper billing of low-dollar claims (E&M up-coding)
- Feedback to the pre-pay process
Both are components of an effective, comprehensive Program Integrity program.

22 Results

23 Overall
- Ingenix does pre-pay and post-pay fraud and abuse detection, avoidance, and collection work for governments, commercial plans, and over 10 government-focused health plans
- Ingenix saved clients approximately $500,000,000 through pre-pay fraud and abuse detection and avoidance services in 2010
- This is the direct savings from claims stopped and reduced or denied that would otherwise have been improperly paid

24 Health Plan Example (savings of professional claims dollars, per 1,000,000 claims)
Process flow: receive claims and apply Ingenix predictive analytics and modeling to score each claim; pend suspect claims and request medical records; deny based on medical records review; review provider appeals.
- 1,709 claims pended and medical records requested (0.17% of total claim volume)
- 854 records received
- 401 denials based on records review
- 855 denials for records not received
- 167 appeals; 50 denials overturned on appeal
- Total net pre-pay denials: 401 + 855 - 50 = 1,206 claims
- Claims savings: 1.5%
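The slide's arithmetic can be checked directly; the figures below are the slide's own numbers for a 1,000,000-claim example.

```python
# Reproducing the health plan example's figures per 1,000,000 claims.
total_claims = 1_000_000
pended = 1_709              # claims pended, medical records requested
records_received = 854
denied_on_review = 401      # denials based on records review
denied_no_records = 855     # denials because records were never received
overturned_on_appeal = 50   # of 167 appeals, denials overturned

# Net denials: both denial paths, less denials overturned on appeal.
net_denials = denied_on_review + denied_no_records - overturned_on_appeal
print(net_denials)                      # 1206, matching the slide
print(f"{pended / total_claims:.2%}")   # 0.17% of total claim volume
```

Note that every record not received becomes a denial, so the two denial counts (401 + 855) account for all 1,709 pended claims (854 received + 855 not received) before appeals.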

25 Questions?
Tom McGraw
thomas.mcgraw@ingenix.com
(804) 357-7739
www.ingenix.com

