Stochastic Reserving and Reserve Ranges
Emmanuel Bardis, FCAS, MAAA
©Towers Perrin — CANE Fall 2005 Meeting

This document was designed for discussion purposes only. It is incomplete and not intended to be used without the accompanying oral presentation and discussion.
Slide 2 — Presentation components
- Deterministic vs. stochastic methods and ASOP No. 36
- Various types of stochastic models
- Criteria for model selection
- Aggregate claim liability distributions
- Range of reasonable estimates and materiality standards
Slide 3 — How do actuaries currently handle reserve ranges?
Slide 4 — Deterministic vs. stochastic methods
- Deterministic methods provide a single best estimate of claim liabilities
- Stochastic methods are more informative than deterministic methods; they
  - produce a full distribution of possible outcomes in addition to the best reserve estimate
  - provide a basis for evaluating the range of reasonable estimates and RMADs (risks of material adverse deviation)
  - provide confidence levels for held reserves
  - consider the volatility of the reserves for each individual line, together with the correlation of losses across the various lines
Slide 5 — Why such limited use of stochastic methods?
- A general lack of understanding of the methods
- Lack of flexibility of the methods, including lack of suitable software
- Lack of immediate need: why bother when traditional methods suffice for calculating a best estimate?
- Lack of clear guidance (accounting or actuarial) on how to calculate reserve estimates
- SSAP No. 55 accounting guidance says that held reserves must be "management's best estimate" (?) of the actual liabilities
Slide 6 — Guidance according to ASOP No. 36
According to ASOP No. 36, "Statement of Actuarial Opinion Regarding Property/Casualty Loss and LAE Reserves":
"In estimating the reasonableness of the reserves, the actuary should consider one or more expected value estimates of the reserves, except when such estimates cannot be made based on available data and reasonable assumptions."
Translation: we must find different ways to calculate the expected value (i.e., the mean) of the unknown distribution. The chance that an expected value estimate exactly equals the actual claim liability amount is virtually zero.
Slide 7 — More from ASOP No. 36
"Other statistical values such as the mode or the median (50th percentile) may not be appropriate measures for evaluating loss and LAE reserves, such as when the expected value estimates can be significantly greater than these other estimates."
Translation: that's easy — actuaries must be conservative.
"The actuary must use various methods to arrive at expected estimates... it is not necessary to estimate or determine the range of all possible values."
Translation: we are, in essence, asked to concentrate our efforts on calculating the statistical mean, not the full associated distribution.
Slide 8 — ASOP No. 36 concludes
"A range of reasonable estimates... could be produced by appropriate actuarial methods... The actuary may include risk margins in a range of reasonable estimates... a range... however, usually does not represent the range of all possible outcomes."
Translation: look at various distributions and select among them. The ASOP is unclear here in distinguishing between:
- the range of best estimates, and
- the range of actual outcomes.
Slide 9 — Calculation of ranges employing multiple projection methods
[Chart: the best estimates produced by Method #1 and Method #2 span a narrow interval; the actual distribution of outcomes has a wider range.]
Slide 10 — Stochastic Theory and Various Types of Models
Slide 11 — All risks that contribute to the uncertainty of claim liability estimates
[Diagram: total risk decomposed into process risk (e.g., a fair die — actual claim liabilities vary around the expected claim liabilities), parameter risk (e.g., an unfair die — the expected claim liabilities differ from the model estimate), and model risk (e.g., six unfair dice — the model itself may be wrong).]
Slide 12 — "Chain ladder" type models
Example — the Mack model assumptions:
I. The expected loss amount at time n equals the known paid loss amount through time (n−1) times a "true" unknown loss development factor
II. No correlation exists among accident years
III. The variability of the link ratios is inversely proportional to the magnitude of the loss amounts
The Mack model provides the mean and the "standard error" of the claim liabilities, where:
(Standard error)² = E[(Actual − Model estimate)² | Triangle]
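The deterministic backbone of the Mack model — volume-weighted development factors applied to a cumulative triangle — can be sketched in a few lines. This is a minimal illustration on a hypothetical 3×3 triangle, not the full Mack calculation (the standard-error estimate, which is the model's real contribution, is omitted here):

```python
# Hypothetical cumulative paid triangle: one row per accident year,
# shorter rows are less mature.
triangle = [
    [1000, 1800, 2100],   # accident year 1, fully developed
    [1100, 2000],         # accident year 2
    [1200],               # accident year 3
]

def ldf(col):
    """Volume-weighted age-to-age factor from column col to col + 1,
    using only the accident years that have both columns."""
    rows = [r for r in triangle if len(r) > col + 1]
    return sum(r[col + 1] for r in rows) / sum(r[col] for r in rows)

# One estimated development factor per development step; under Mack's
# assumption I these estimate the "true" unknown factors.
factors = [ldf(k) for k in range(2)]

# Project each open accident year's latest diagonal to ultimate.
ultimates = []
for row in triangle:
    c = row[-1]
    for k in range(len(row) - 1, len(factors)):
        c *= factors[k]
    ultimates.append(c)
```

The Mack model then attaches a standard error to each projected ultimate, quantifying how far the actual outcome may sit from this point estimate.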
Slide 13 — "Simulation" type models
- Bootstrapping is a powerful yet simple technique that employs simulation and avoids fitting complex models
- The chain ladder method (CLM) produces reserve estimates identical to those of an over-dispersed Poisson (ODP) generalized linear model
- Incremental fitted payments from the CLM are compared to historical increments, producing residuals
- The regression residuals are approximately independent and identically distributed around zero
- Bootstrapping samples the residuals with replacement; the simulated residuals produce forecasted incremental payments
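The resampling step above can be sketched as follows. The fitted incrementals and residuals are hypothetical placeholders (in practice they come from the ODP chain-ladder fit), and residuals are re-applied Pearson-style (actual = fitted + r·√fitted) without the scale-factor adjustments a production bootstrap would include:

```python
import random

random.seed(42)  # reproducible sketch

# Hypothetical fitted incremental payments from a chain-ladder / ODP fit,
# and the residuals observed between actual and fitted increments
# (assumed approximately i.i.d. around zero).
fitted = [500.0, 300.0, 150.0, 50.0]
residuals = [-1.2, 0.4, 0.9, -0.1, 0.6, -0.5]

def pseudo_payments(fitted, residuals):
    """One bootstrap pseudo data set: sample residuals with replacement
    and re-apply them to the fitted incremental payments."""
    return [m + random.choice(residuals) * m ** 0.5 for m in fitted]

# Each pseudo data set yields one simulated reserve; repeating many
# times gives a full distribution, not just a point estimate.
simulated_reserves = sorted(
    sum(pseudo_payments(fitted, residuals)) for _ in range(1000)
)
best_estimate = sum(simulated_reserves) / len(simulated_reserves)
p75 = simulated_reserves[749]  # empirical 75th percentile
```

From the same simulated distribution one can read off any percentile, which is exactly what the deterministic chain ladder cannot provide.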
Slide 14 — "Incremental" type models
- These models employ log-incremental rather than cumulative payments
- They fit curves across accident, development, and payment years, producing "parsimonious" models
- The statistical framework allows the user of the model to test the significance of the parameters
- Goodness-of-fit tests allow the user to "tailor" the model parameters to the characteristics of the data
- Examples are the Christofides log-incremental model and the "ICRFs" model
Slide 15 — Graphs for incremental models
[Charts not reproduced.]
Slide 16 — Bayesian type models
- Incremental models are fitted to historical data and produce "best fitted" parameters; future payments are then calculated from these parameters
- Bayesian models instead assume a "prior" distribution for the parameters
- Based on Bayes' theorem and the historical data, a "posterior" distribution of the parameters is produced
- Monte Carlo simulation produces the distribution of the parameters and of the future payments
- Bayesian models incorporate the informed judgment of the model's users
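The prior-to-posterior-to-simulation chain can be illustrated with a deliberately simple conjugate example. All numbers here are hypothetical (a gamma prior on a Poisson claim frequency with an assumed flat severity), chosen so the posterior has a closed form; real reserving applications put priors on the incremental-model parameters instead:

```python
import math
import random

random.seed(7)

# Hypothetical prior judgment: claim frequency ~ Gamma(a0, rate b0),
# prior mean a0 / b0 = 2 claims per period.
a0, b0 = 20.0, 10.0
observed_counts = [3, 1, 2, 4, 2]  # hypothetical historical data

# Gamma-Poisson conjugacy: posterior is Gamma(a0 + sum(counts), b0 + n).
a_post = a0 + sum(observed_counts)      # 32
b_post = b0 + len(observed_counts)      # 15

def poisson(lam):
    """Poisson draw via Knuth's multiplication algorithm (fine for small lam)."""
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        p *= random.random()
        if p <= L:
            return k
        k += 1

# Monte Carlo: draw the parameter from its posterior, then simulate a
# future period's payments given that draw — so the output distribution
# reflects both parameter risk and process risk.
future = []
for _ in range(5000):
    lam = random.gammavariate(a_post, 1.0 / b_post)
    future.append(poisson(lam) * 10_000)  # assumed 10,000 average severity

mean_future = sum(future) / len(future)
```

Tightening or loosening the prior (a0, b0) is how the user's informed judgment enters the final distribution.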
Slide 17 — Criteria for model selection
Data:
- Are the model assumptions satisfied by the data?
- Do correlations across accident years exist?
- Is any negative loss development present?
Cost/benefit considerations:
- What is the marginal benefit of complicated models?
- What is the cost of "specialist" software?
- How difficult is it to explain to management?
Reasonability checks:
- Standard errors should increase for immature years
Goodness of fit:
- Find the model that best "fits" the data; complexity must be appropriately penalized
Slide 18 — Aggregate Claim Liabilities Distribution
Slide 19 — What about aggregate distributions?
- The 75th percentile of the combined distribution of two lines of business is generally NOT the sum of the 75th percentiles of the individual distributions
- The two are equal only in the case of perfect correlation between the lines, which is very unlikely
- Generally the aggregate distribution is less risky than the sum of the risks of its parts
- The volatility of the aggregate distribution increases with:
  - the volatility of the individual lines
  - the correlation between the lines
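The first bullet is easy to demonstrate by simulation. The sketch below uses two hypothetical, independent lognormal lines (independence is the extreme opposite of perfect correlation, so the diversification effect is at its largest):

```python
import random

random.seed(0)
N = 20_000

def p75(xs):
    """Empirical 75th percentile."""
    return sorted(xs)[int(0.75 * len(xs))]

# Two hypothetical independent lines with lognormal claim liabilities.
line_a = [random.lognormvariate(5.0, 0.5) for _ in range(N)]
line_b = [random.lognormvariate(5.0, 0.8) for _ in range(N)]
total = [a + b for a, b in zip(line_a, line_b)]

sum_of_p75 = p75(line_a) + p75(line_b)
p75_of_sum = p75(total)
# With less-than-perfect correlation, the aggregate 75th percentile
# falls below the sum of the stand-alone 75th percentiles.
```

As the correlation between the lines is dialed up toward 1, the gap between `p75_of_sum` and `sum_of_p75` shrinks to zero.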
Slide 20 — Impact on the 75th percentile of the aggregate distribution
[Chart not reproduced.]
Slide 21 — Theory of copulas
- Copulas provide a convenient way to express the aggregate distribution of several random variables
- Copula components: the distributions of the individual random variables, and the correlations among these variables
- Correlation coefficients measure the overall strength of association across the various distributions
- Copulas can vary that degree of association over different parts of the aggregate distribution
- Example: for workers compensation and property losses, the correlation is higher in the tail of the distribution
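A minimal sketch of the copula mechanics: correlated standard normals are mapped to correlated uniforms (the Gaussian copula), and each uniform is then fed through its own line's marginal inverse CDF. The correlation level and the lognormal marginals are hypothetical; note the Gaussian copula has no extra tail dependence, which is exactly why t or Gumbel copulas are used when tail correlation matters:

```python
import math
import random

random.seed(1)
RHO = 0.6  # assumed copula correlation (illustrative)

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

xs1, xs2 = [], []
for _ in range(10_000):
    # Correlated standard normals define the Gaussian copula...
    z1 = random.gauss(0.0, 1.0)
    z2 = RHO * z1 + math.sqrt(1.0 - RHO ** 2) * random.gauss(0.0, 1.0)
    u1, u2 = phi(z1), phi(z2)  # ...a pair of dependent uniforms on [0, 1]
    # Each uniform goes through its line's marginal inverse CDF; for a
    # lognormal marginal this simplifies to exp(mu + sigma * z).
    xs1.append(math.exp(5.0 + 0.5 * z1))
    xs2.append(math.exp(4.0 + 0.8 * z2))

# Empirical correlation of the two simulated lines (weaker than RHO,
# since linear correlation is not preserved by the marginals).
n = len(xs1)
m1, m2 = sum(xs1) / n, sum(xs2) / n
cov = sum((a - m1) * (b - m2) for a, b in zip(xs1, xs2)) / n
sd1 = (sum((a - m1) ** 2 for a in xs1) / n) ** 0.5
sd2 = (sum((b - m2) ** 2 for b in xs2) / n) ** 0.5
corr = cov / (sd1 * sd2)
```

Summing `xs1[i] + xs2[i]` sample by sample gives the aggregate distribution whose percentiles the previous slide discusses.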
Slide 22 — Comparison of copulas
[Chart not reproduced.]
Slide 23 — Range of Reasonable Estimates and Standards of Materiality
Slide 24 — "Standards" for comparing actuarial estimates
"Statistical" materiality, in an actuarial/statistical context:
- Is the difference between two actuarial estimates statistically significant?
- Parameter risk is relevant here: we are concerned with the variability of the expected claim liabilities only
"Financial" materiality, in a financial reporting context:
- Would users of financial statements draw different conclusions if a different reserve estimate were booked in the financial statements?
- Total risk is relevant here: we are concerned with the variability of the actual claim liabilities
Slide 25 — We follow a "null hypothesis" testing approach
[Diagram: carried reserves at the center of the distribution; the lower and upper materiality standards mark the points beyond which the Type I error equals the chosen significance level in each tail.]
Slide 26 — "Statistical" materiality standards at a 7.5% significance level
- The coefficient of variation (CV) of the claim liabilities is proportional to the "inherent" risk of a line of business
- The range implied by the statistical standards is proportional to the volatility of the line of business
- "Financial" materiality standards should consider the risk of the claim liabilities in comparison to surplus and net income
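Under a simplifying assumption that the expected claim liabilities are approximately normally distributed around the best estimate, the statistical materiality standards at a 7.5% per-tail significance level reduce to best estimate ± z·CV·(best estimate). The reserve amount and CV below are hypothetical:

```python
import math

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

def phi_inv(p):
    """Inverse standard normal CDF by bisection."""
    lo, hi = -8.0, 8.0
    for _ in range(80):
        mid = (lo + hi) / 2.0
        if phi(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

best_estimate = 100.0e6  # hypothetical carried/indicated reserve
cv = 0.10                # assumed CV of the expected claim liabilities
alpha = 0.075            # 7.5% significance level in each tail

z = phi_inv(1.0 - alpha)                  # ~1.44
lower = best_estimate * (1.0 - z * cv)    # lower materiality standard
upper = best_estimate * (1.0 + z * cv)    # upper materiality standard
```

The width of the resulting range scales directly with the CV, which is the sense in which the standards are proportional to the line's inherent volatility.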
Slide 27 — A little diversion: financial vs. statistical standards
- Even if the carried and indicated reserves are "statistically" indistinguishable from one another, the risk of material adverse deviation may still remain
- This happens when: upper "statistical" range − carried reserves ≥ financial materiality standard
- The upper statistical range is produced by "appropriate" actuarial methods and sets of assumptions
[Diagram: lower statistical range, carried reserves, upper statistical range, and carried reserves + financial materiality standard arranged on a number line.]