Reserve Variability – Session II: Who Is Doing What? Mark R. Shapland, FCAS, ASA, MAAA Casualty Actuarial Society Spring Meeting San Juan, Puerto Rico May 7-10, 2006

Overview: Methods vs. Models; Types of Models; Sources of Uncertainty; Model Evaluation and Selection

Methods vs. Models A Method is an algorithm or recipe – a series of steps that are followed to give an estimate of future payments. The well-known chain ladder (CL) and Bornhuetter-Ferguson (BF) methods are examples.
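As a small illustration of the "recipe" character of a method, the chain ladder can be sketched in a few lines. The triangle values below are invented for illustration and are not from the presentation:

```python
# Minimal chain-ladder sketch on a hypothetical 4x4 paid-loss triangle.
tri = [
    [100, 150, 175, 180],
    [110, 165, 190],
    [120, 180],
    [130],
]

n = len(tri)
# Volume-weighted age-to-age (link) ratios: sum of the next column over
# the sum of the current column, using only rows where both cells exist.
factors = []
for k in range(n - 1):
    num = sum(row[k + 1] for row in tri if len(row) > k + 1)
    den = sum(row[k] for row in tri if len(row) > k + 1)
    factors.append(num / den)

# Complete each row by multiplying out the remaining factors.
ultimates = []
for row in tri:
    value = row[-1]
    for f in factors[len(row) - 1:]:
        value *= f
    ultimates.append(value)

reserves = [u - row[-1] for u, row in zip(ultimates, tri)]
print(ultimates)
print(sum(reserves))
```

Note that nothing here is stochastic: the method produces a single point estimate of future payments, with no statement about its uncertainty.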

Methods vs. Models A Model specifies statistical assumptions about the loss process, usually leaving some parameters to be estimated. Then estimating the parameters gives an estimate of the ultimate losses and some statistical properties of that estimate.

Types of Models: Individual Claim Models vs. Triangle Based Models; Conditional Models vs. Unconditional Models; Single Triangle Models vs. Multiple Triangle Models; Parametric Models vs. Non-Parametric Models; Diagonal Term vs. No Diagonal Term; Fixed Parameters vs. Variable Parameters

Sources of Uncertainty Process Risk – the randomness of future outcomes given a known distribution of possible outcomes. Parameter Risk – the potential error in the estimated parameters used to describe the distribution of possible outcomes, assuming the process generating the outcomes is known. Model Risk – the chance that the model (“process”) used to estimate the distribution of possible outcomes is incorrect or incomplete.
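A small simulation can make the first two sources concrete. Everything here (a Normal severity with assumed mean and standard deviation, a sample size of 25) is an illustrative assumption, not from the presentation:

```python
import random

random.seed(42)

# True (unknown) mean severity and spread of a hypothetical claim process.
mu_true, sigma = 1000.0, 200.0

# Parameter risk: re-estimating the mean from repeated samples of 25 claims
# gives a different parameter estimate each time.
est_means = []
for _ in range(2000):
    sample = [random.gauss(mu_true, sigma) for _ in range(25)]
    est_means.append(sum(sample) / len(sample))

# Process risk: even with the parameter known exactly, outcomes still vary.
outcomes = [random.gauss(mu_true, sigma) for _ in range(2000)]

def sd(xs):
    m = sum(xs) / len(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

print(sd(est_means))   # parameter risk: roughly sigma / 25**0.5 = 40
print(sd(outcomes))    # process risk: roughly sigma = 200
```

Model risk is the one source this sketch cannot show: it is the possibility that the Normal assumption itself is wrong.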

Model Selection and Evaluation Actuaries Have Built Many Sophisticated Models Based on Collective Risk Theory All Models Make Simplifying Assumptions How do we Evaluate Them?

How Do We “Evaluate”? [chart comparing liability estimates of $70M and $140M]

How Do We “Evaluate”? [chart: probability distribution of liability estimates]

Model Selection and Evaluation Actuaries Have Built Many Sophisticated Models Based on Collective Risk Theory All Models Make Simplifying Assumptions How do we Evaluate Them? Answer Some Fundamental Questions

Fundamental Questions

Fundamental Questions How Well Does the Model Measure and Reflect the Uncertainty Inherent in the Data? Does the Model do a Good Job of Capturing and Replicating the Statistical Features Found in the Data?

Modeling Goals Is the Goal to Minimize the Range (or Uncertainty) that Results from the Model? Goal of Modeling is NOT to Minimize Process Uncertainty! Goal is to Find the Best Statistical Model, While Minimizing Parameter and Model Uncertainty.

How Do We “Evaluate”? [chart: probability distributions of liability estimates]

Model Selection & Evaluation Criteria: Model Selection Criteria; Model Reasonability Checks; Goodness-of-Fit & Prediction Errors

Model Selection Criteria Criterion 1: Aims of the Analysis – Will the Procedure Achieve the Aims of the Analysis? Criterion 2: Data Availability – Access to the Required Data Elements? – Unit Record-Level Data or Summarized “Triangle” Data?

Model Selection Criteria Criterion 3: Non-Data Specific Modeling Technique Evaluation – Has Procedure been Validated Against Historical Data? – Verified to Perform Well Against Dataset with Similar Features? – Assumptions of the Model Plausible Given What is Known About the Process Generating this Data?

Model Selection Criteria Criterion 4: Cost/Benefit Considerations – Can Analysis be Performed Using Widely Available Software? – Analyst Time vs. Computer Time? – How Difficult to Describe to Junior Staff, Senior Management, Regulators, Auditors, etc.?

Model Reasonability Checks Criterion 5: Coefficient of Variation by Year – Should be Largest for Oldest (Earliest) Year Criterion 6: Standard Error by Year – Should be Smallest for Oldest (Earliest) Year (on a Dollar Scale)

Model Reasonability Checks Criterion 7: Overall Coefficient of Variation – Should be Smaller for All Years Combined than any Individual Year Criterion 8: Overall Standard Error – Should be Larger for All Years Combined than any Individual Year
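Criteria 5–8 can be checked mechanically on any simulated reserve output. The sketch below uses assumed means and coefficients of variation for four accident years (oldest first), purely to illustrate the checks, not the presentation's own figures:

```python
import random

random.seed(1)

# Hypothetical simulated reserves by accident year (oldest year first):
# older years have small remaining reserves but proportionally more uncertainty.
means = [100, 500, 2000, 8000]      # expected reserve by year
cvs   = [0.60, 0.40, 0.25, 0.15]    # assumed coefficients of variation

sims = [[random.gauss(m, m * c) for m, c in zip(means, cvs)]
        for _ in range(10000)]

def mean(xs): return sum(xs) / len(xs)
def sd(xs):
    m = mean(xs)
    return (sum((x - m) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

by_year = list(zip(*sims))          # one column per accident year
totals = [sum(draws) for draws in sims]

se_year = [sd(col) for col in by_year]
cv_year = [s / mean(col) for s, col in zip(se_year, by_year)]

# Criterion 5: CoV largest for the oldest year.
# Criterion 6: standard error smallest for the oldest year.
# Criterion 7: overall CoV smaller than for any individual year.
# Criterion 8: overall standard error larger than for any individual year.
print(cv_year, sd(totals) / mean(totals))
print(se_year, sd(totals))
```

With independent years, criterion 8 follows because variances add, while criterion 7 follows from diversification across years.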

Model Reasonability Checks Criterion 9: Correlated Standard Error & Coefficient of Variation – Should Both be Smaller for All LOBs Combined than the Sum of Individual LOBs Criterion 10: Reasonability of Model Parameters and Development Patterns – Is Loss Development Pattern Implied by Model Reasonable?

Model Reasonability Checks Criterion 11: Consistency of Simulated Data with Actual Data – Can you Distinguish Simulated Data from Real Data? Criterion 12: Model Completeness and Consistency – Is it Possible Other Data Elements or Knowledge Could be Integrated for a More Accurate Prediction?

Goodness-of-Fit & Prediction Errors Criterion 13: Validity of Link Ratios – Link Ratios are a Form of Regression and Can be Tested Statistically
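Criterion 13 can be illustrated with a no-intercept regression: the volume-weighted link ratio is exactly the least-squares slope of next-period losses on current-period losses, so it comes with a standard error and a t statistic. The loss figures below are hypothetical:

```python
# Illustrative losses for the same accident years at two ages.
x = [100.0, 110.0, 120.0, 95.0, 130.0]   # losses at age 12 months
y = [148.0, 168.0, 178.0, 140.0, 199.0]  # same years at age 24 months

n = len(x)

# No-intercept least squares: slope = sum(x*y) / sum(x^2) -- the link ratio.
b = sum(xi * yi for xi, yi in zip(x, y)) / sum(xi * xi for xi in x)

# Residual variance and the standard error of the slope.
resid = [yi - b * xi for xi, yi in zip(x, y)]
s2 = sum(r * r for r in resid) / (n - 1)
se_b = (s2 / sum(xi * xi for xi in x)) ** 0.5

# t statistic: is the development significantly different from 1.0 (no development)?
t = (b - 1.0) / se_b
print(b, t)
```

The same framework supports further tests (e.g., whether an intercept term or a different weighting fits the data better).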

Goodness-of-Fit & Prediction Errors Criterion 14: Standardization of Residuals – Standardized Residuals Should be Checked for Normality, Outliers, Heteroscedasticity, etc.
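A minimal sketch of the Criterion 14 checks, using made-up standardized residuals: flag outliers beyond three standard deviations, compute rough normality diagnostics (skewness and excess kurtosis near zero), and compare the spread of two halves as a crude heteroscedasticity signal:

```python
# Illustrative standardized residuals, assumed ordered by development age.
resid = [-1.2, 0.4, 0.8, -0.3, 1.5, -0.9, 0.2, -0.5, 1.1, -0.7, 0.1, 0.6]

n = len(resid)
m = sum(resid) / n
s = (sum((r - m) ** 2 for r in resid) / (n - 1)) ** 0.5

# Outlier check: any residual more than 3 standard deviations out?
outliers = [r for r in resid if abs((r - m) / s) > 3]

# Rough normality check: sample skewness and excess kurtosis.
skew = sum(((r - m) / s) ** 3 for r in resid) / n
ex_kurt = sum(((r - m) / s) ** 4 for r in resid) / n - 3

def spread(xs):
    mu = sum(xs) / len(xs)
    return (sum((x - mu) ** 2 for x in xs) / (len(xs) - 1)) ** 0.5

# Crude heteroscedasticity flag: spread of early ages vs. late ages.
ratio = spread(resid[: n // 2]) / spread(resid[n // 2:])
print(outliers, round(skew, 2), round(ex_kurt, 2), round(ratio, 2))
```

In practice these numeric checks complement, rather than replace, the residual plots shown on the surrounding slides.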

Standardized Residuals

Standardized Residuals

Goodness-of-Fit & Prediction Errors Criterion 15: Analysis of Residual Patterns – Check Against Accident, Development and Calendar Periods Criterion 16: Prediction Error and Out-of-Sample Data – Test the Accuracy of Predictions on Data that was Not Used to Fit the Model

Goodness-of-Fit & Prediction Errors Criterion 17: Goodness-of-Fit Measures – Quantitative Measures that Enable One to Find the Optimal Tradeoff Between Minimizing Model Bias and Predictive Variance: Adjusted Sum of Squared Errors (SSE); Akaike Information Criterion (AIC); Bayesian Information Criterion (BIC)
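Assuming the Gaussian forms of these criteria, AIC and BIC can be compared across candidate models as below. The sample size, SSE values, and parameter counts are invented for illustration:

```python
import math

n = 40              # observations in the fitted triangle (assumed)
sse_full  = 520.0   # SSE of a model with p = 9 parameters (assumed)
sse_small = 610.0   # SSE of a simpler model with p = 4 parameters (assumed)

# Gaussian-likelihood forms: n*ln(SSE/n) plus a parameter-count penalty.
def aic(sse, p):
    return n * math.log(sse / n) + 2 * p

def bic(sse, p):
    return n * math.log(sse / n) + p * math.log(n)

print(aic(sse_full, 9), aic(sse_small, 4))
print(bic(sse_full, 9), bic(sse_small, 4))
```

The lower value wins; because ln(40) > 2, BIC penalizes the extra parameters more heavily than AIC, which is exactly the bias-variance tradeoff the criterion describes.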

Goodness-of-Fit & Prediction Errors Criterion 18: Ockham’s Razor and the Principle of Parsimony – All Else Being Equal, the Simpler Model is Preferable Criterion 19: Model Validation – Systematically Remove Last Several Diagonals and Make Same Forecast of Ultimate Values Without the Excluded Data
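Criterion 19 can be sketched on a hypothetical triangle: hold out the latest calendar diagonal, refit the development factors on the reduced triangle, and compare the one-step forecasts with the held-out cells:

```python
# Illustrative triangle, not from the presentation.
tri = [
    [100, 150, 175, 180],
    [110, 165, 190],
    [120, 180],
    [130],
]

# Hold out the latest calendar diagonal (the last cell of every row).
reduced = [row[:-1] for row in tri if len(row) > 1]

# Refit volume-weighted development factors on the reduced triangle.
m = len(reduced)
factors_r = []
for k in range(m - 1):
    num = sum(r[k + 1] for r in reduced if len(r) > k + 1)
    den = sum(r[k] for r in reduced if len(r) > k + 1)
    factors_r.append(num / den)

# One-step-ahead forecasts of the held-out cells, where a refit factor exists.
errors = []
for i, row in enumerate(reduced):
    k = len(row) - 1               # development age of the held-out cell
    if k < len(factors_r):
        pred = row[-1] * factors_r[k]
        errors.append(pred - tri[i][k + 1])
print(errors)                      # held-out prediction errors
```

Repeating this with several diagonals removed gives a track record of out-of-sample errors against which competing models can be ranked.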

Mack Model Develops a formula for the standard error of the chain ladder development estimate. Assumes the error terms (i.e., residuals) are Normally distributed. Assumes the error terms are uniformly proportional (i.e., homoscedastic).
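As a sketch of what sits behind that standard-error formula, the variance parameters can be estimated from the weighted squared deviations of the individual link ratios around each volume-weighted factor. The triangle is illustrative, and only these variance estimates are shown, not the full mean-squared-error calculation:

```python
# Illustrative triangle with non-identical individual link ratios.
tri = [
    [100, 150, 175, 180],
    [110, 168, 192],
    [115, 180],
    [125],
]
n = len(tri)

factors, sigma2 = [], []
for k in range(n - 1):
    rows = [r for r in tri if len(r) > k + 1]
    # Volume-weighted development factor for age k -> k+1.
    f = sum(r[k + 1] for r in rows) / sum(r[k] for r in rows)
    factors.append(f)
    # Variance parameter: weighted squared deviations of individual ratios
    # around f, with (number of ratios - 1) degrees of freedom.
    if len(rows) > 1:
        s2 = sum(r[k] * (r[k + 1] / r[k] - f) ** 2 for r in rows) / (len(rows) - 1)
        sigma2.append(s2)
print(factors)
print(sigma2)   # no estimate for the last age (only one observed ratio)
```

The last development age has a single observed ratio, so its variance parameter cannot be estimated this way and is typically extrapolated.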

Bootstrapping Model Simulates the distribution of the aggregate reserve based on the chain ladder method (it could be based on other methods). Utilizes the standardized residuals for the simulation. Does not require distributional assumptions about the residuals.
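A heavily simplified sketch of the idea: a production bootstrap resamples the standardized residuals of a fitted model, but purely for illustration the individual link ratios themselves are resampled here to rebuild pseudo-factors and re-project the latest diagonal. The triangle is illustrative:

```python
import random

random.seed(7)

tri = [
    [100, 150, 175, 180],
    [110, 168, 192],
    [115, 180],
    [125],
]
n = len(tri)

# Observed individual link ratios, grouped by development age.
ratios = [
    [r[k + 1] / r[k] for r in tri if len(r) > k + 1]
    for k in range(n - 1)
]

boot_reserves = []
for _ in range(5000):
    # Pseudo-factor per age: mean of a with-replacement resample of that
    # age's observed ratios.
    fs = [sum(random.choices(col, k=len(col))) / len(col) for col in ratios]
    total = 0.0
    for row in tri:
        v = row[-1]
        for f in fs[len(row) - 1:]:
            v *= f
        total += v - row[-1]
    boot_reserves.append(total)

boot_reserves.sort()
mean_res = sum(boot_reserves) / len(boot_reserves)
p75 = boot_reserves[int(0.75 * len(boot_reserves))]
print(mean_res, p75)   # a full distribution, not just a point estimate
```

The payoff is the last line: instead of a single reserve figure, the simulation yields an empirical distribution from which any percentile can be read.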

And Now for Something Completely Different... Hands-on Models