
©Towers Perrin
Stochastic Reserving and Reserve Ranges
Emmanuel Bardis, FCAS, MAAA
CANE Fall 2005 Meeting
This document was designed for discussion purposes only. It is incomplete and not intended to be used without the accompanying oral presentation and discussion.

Presentation components
- Deterministic vs. stochastic methods and ASOP #36
- Various types of stochastic models
- Criteria for model selection
- Aggregate claim liability distributions
- Range of reasonable estimates and materiality standards

How do actuaries currently handle reserve ranges?

Deterministic vs. Stochastic Methods
- Deterministic methods provide a single best estimate of claim liabilities
- Stochastic methods are more informative than deterministic methods:
  - They produce a full distribution of possible outcomes, in addition to the best reserve estimate
  - They provide a basis for evaluating the range of reasonable estimates and risks of material adverse deviation (RMADs)
  - They provide confidence levels for held reserves
  - They consider the volatility of the reserves for each individual line, together with the correlation of losses across the various lines

Why such limited use of stochastic methods?
- A general lack of understanding of the methods
- Lack of flexibility of the methods, including lack of suitable software
- Lack of immediate need: why bother when traditional methods suffice for the calculation of a best estimate?
- Lack of clear guidance (accounting or actuarial) on how to calculate reserve estimates
- SSAP #55 accounting guidance says that held reserves must be "management's best estimate" (?) of the actual liabilities

Guidance according to ASOP #36
According to ASOP #36, "Statement of Actuarial Opinion Regarding Property/Casualty Loss and LAE Reserves":
"In estimating the reasonableness of the reserves, the actuary should consider one or more expected value estimates of the reserves, except when such estimates cannot be made based on available data and reasonable assumptions."
Translation: We must find different ways to calculate the expected value (i.e., the mean) of the unknown distribution. The chance of an expected value equaling the actual claim liability amount is virtually zero.

More ASOP #36…
"Other statistical values such as the mode or the median (50th percentile) may not be appropriate measures for evaluating loss and LAE reserves, such as when the expected value estimates can be significantly greater than these other estimates."
Translation: That's easy! Actuaries must be conservative.
"The actuary must use various methods to arrive at expected estimates… it is not necessary to estimate or determine the range of all possible values."
Translation: We are, in essence, asked to concentrate our efforts on calculating the statistical mean, not the associated distribution.

ASOP #36 concludes
"A range of reasonable estimates… could be produced by appropriate actuarial methods… The actuary may include risk margins in a range of reasonable estimates… a range… however, usually does not represent the range of all possible outcomes."
Translation: Look at various distributions and select among them. ASOP is unclear here on distinguishing between:
- the range of best estimates, and
- the range of actual outcomes

Calculation of ranges employing multiple projection methods
[Chart: best estimates from Method #1 and Method #2 span a range of estimates; the actual distribution of outcomes has a wider range.]

Stochastic Theory and Various Types of Models

All risks that contribute to the uncertainty of claim liability estimates
[Diagram: Total risk is decomposed into process risk (e.g., a fair die), parameter risk (e.g., an unfair die), and model risk (e.g., six unfair dice), relating the model estimate, the expected claim liabilities, and the actual claim liabilities.]

"Chain Ladder" type of models
Example: Mack model assumptions:
I. The expected loss amount at time n is equal to the product of the known paid loss amount through time (n-1) times a "true" unknown loss development factor
II. No correlation among accident years exists
III. The variability of the link ratios is inversely proportional to the magnitude of the loss amounts
The Mack model provides the mean and "standard error" of the claim liabilities, where:
Standard error² = E[(Actual − Model Estimate)² | Triangle]
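As a rough illustration of assumption I, here is a minimal chain-ladder sketch in Python/NumPy. The triangle values are hypothetical, and this computes only the mean claim liabilities, not Mack's standard error:

```python
import numpy as np

# Hypothetical 4x4 cumulative paid-loss triangle: rows are accident years,
# columns are development periods, NaN marks the unknown future cells.
tri = np.array([
    [100.0, 150.0, 165.0, 170.0],
    [110.0, 168.0, 185.0, np.nan],
    [120.0, 175.0, np.nan, np.nan],
    [130.0, np.nan, np.nan, np.nan],
])

def volume_weighted_ldfs(tri):
    """Estimate each 'true' development factor (assumption I) by the
    volume-weighted average of the observed link ratios in that column."""
    ldfs = []
    for j in range(tri.shape[1] - 1):
        mask = ~np.isnan(tri[:, j]) & ~np.isnan(tri[:, j + 1])
        ldfs.append(tri[mask, j + 1].sum() / tri[mask, j].sum())
    return ldfs

def project_ultimates(tri, ldfs):
    """Roll each accident year's latest diagonal forward to ultimate."""
    ult = []
    for row in tri:
        last = int(np.sum(~np.isnan(row))) - 1  # index of latest known cell
        value = row[last]
        for f in ldfs[last:]:
            value *= f
        ult.append(value)
    return np.array(ult)

ldfs = volume_weighted_ldfs(tri)
ultimates = project_ultimates(tri, ldfs)
latest = np.array([row[int(np.sum(~np.isnan(row))) - 1] for row in tri])
reserves = ultimates - latest  # mean claim liabilities by accident year
```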

"Simulation" type of models
- Bootstrapping is a powerful, yet simple, technique that employs simulations, avoiding the fit of complex models
- The chain ladder method (CLM) produces reserve estimates identical to those of a generalized linear model (the over-dispersed Poisson, ODP)
- Incremental fitted payments from the CLM are compared to historical increments, producing residuals
- Regression residuals are approximately independent and identically distributed around zero
- Bootstrapping involves sampling the residuals with replacement; the simulated residuals produce forecast incremental payments
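A stripped-down sketch of the resampling step, assuming a vector of fitted incrementals and unscaled Pearson residuals are already in hand. All numbers are hypothetical, and a full implementation would bootstrap an entire triangle and refit the CLM on each pseudo-triangle:

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fitted incremental payments from the CLM/ODP model, and the
# corresponding observed history (a single row, to keep the sketch short).
fitted = np.array([100.0, 60.0, 30.0, 15.0])
observed = np.array([95.0, 66.0, 27.0, 17.0])

# Unscaled Pearson residuals of the ODP model: (obs - fitted) / sqrt(fitted)
residuals = (observed - fitted) / np.sqrt(fitted)

def bootstrap_totals(fitted, residuals, n_sims=10_000):
    """Sample residuals with replacement, rebuild pseudo-increments, and
    record each simulated total; the spread approximates reserve variability."""
    sims = np.empty(n_sims)
    for i in range(n_sims):
        r = rng.choice(residuals, size=fitted.size, replace=True)
        pseudo = fitted + r * np.sqrt(fitted)  # invert the residual formula
        sims[i] = pseudo.sum()
    return sims

sims = bootstrap_totals(fitted, residuals)
```

The empirical distribution of `sims` is what replaces the single deterministic estimate; its quantiles feed directly into the range discussion later in the deck.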

"Incremental" type of models
- These models employ log-incremental rather than cumulative payments
- They fit curves across accident, development, and payment years, producing "parsimonious" models
- The statistical framework allows the user of the model to test the significance of the parameters
- Goodness-of-fit tests allow the user to "tailor" the model parameters to fit the characteristics of the data
- Examples are the Christofides "log-incremental" model and the "ICRFs" model
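A minimal sketch of the log-incremental idea: fit a straight line to the log of hypothetical incremental payments across development periods. The Christofides framework fits richer curves across accident and payment years as well; this shows only the core regression step:

```python
import numpy as np

# Hypothetical incremental payments for one accident year by development period.
dev = np.arange(1, 7).astype(float)
incr = np.array([400.0, 250.0, 160.0, 95.0, 60.0, 38.0])

# Log increments decay roughly linearly with development period, so fit
# log(incr) = a + b * dev by ordinary least squares.
y = np.log(incr)
X = np.column_stack([np.ones_like(dev), dev])
(a, b), *_ = np.linalg.lstsq(X, y, rcond=None)

# Forecast the next two increments from the fitted curve.
forecast = np.exp(a + b * np.array([7.0, 8.0]))
```

Because the model is an ordinary regression, standard errors and significance tests for `a` and `b` come for free, which is exactly the "test the significance of the parameters" point above.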

Graphs for Incremental models

Bayesian type of models
- Incremental models are fitted to historical data and produce "best fitted" parameters; future payments are then calculated based on these parameters
- Bayesian models instead assume a "prior" distribution of the parameters
- Based on Bayes' theorem and the historical data, a "posterior" distribution of the parameters is produced
- Monte Carlo simulations produce the distribution of the parameters and the future payments
- Bayesian models incorporate the informed judgment of the model's users
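A toy sketch of the prior-to-posterior step for a single link ratio, using a conjugate normal-normal update so the posterior is available in closed form. All numbers are hypothetical assumptions; real applications use Markov chain Monte Carlo over many parameters:

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical prior judgment about one link ratio, plus observed link ratios.
prior_mean, prior_var = 1.10, 0.02**2   # the "prior": informed judgment
obs = np.array([1.08, 1.12, 1.09])      # historical link ratios
obs_var = 0.03**2                       # assumed known observation noise

# Conjugate normal-normal update: the "posterior" has a closed form here.
post_prec = 1.0 / prior_var + obs.size / obs_var
post_var = 1.0 / post_prec
post_mean = post_var * (prior_mean / prior_var + obs.sum() / obs_var)

# Monte Carlo: simulate the parameter, then the future payment given it.
f = rng.normal(post_mean, np.sqrt(post_var), size=50_000)
paid_to_date = 1_000.0
future_paid = paid_to_date * f  # distribution of next-period cumulative paid
```

Note how the posterior mean sits between the prior judgment and the data average, which is precisely how these models "incorporate the informed judgment of the model's users."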

Criteria for model selection
- Data: Are the model assumptions satisfied by the data? Do correlations across accident years exist? Is any negative loss development present?
- Cost/benefit considerations: What is the marginal benefit of complicated models? What is the cost of "specialist" software? How difficult is it to explain to management?
- Reasonability checks: Standard errors should increase for immature years
- Goodness of fit: Find the model that best "fits" the data; complexity must be appropriately penalized

Aggregate Claim Liabilities Distribution

What about aggregate distributions?
- The 75th percentile of the combined distribution of two lines of business is generally NOT the sum of the 75th percentiles of the individual distributions
- Equality holds only in the case of perfect correlation between the two lines, which is very unlikely!
- Generally, the aggregate distribution is less risky than the sum of the risks of the parts
- The volatility of the aggregate distribution increases with:
  - the volatility of the individual lines
  - the correlation between the lines
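The first bullet is easy to verify by simulation; here is a sketch with two hypothetical lognormal lines and a moderate (far from perfect) correlation:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Two hypothetical lines with lognormal reserve distributions, linked by a
# moderate correlation in the underlying normals.
rho = 0.3
z1 = rng.standard_normal(n)
z2 = rho * z1 + np.sqrt(1.0 - rho**2) * rng.standard_normal(n)
line_a = np.exp(0.0 + 0.5 * z1)
line_b = np.exp(0.2 + 0.7 * z2)

q75_a = np.quantile(line_a, 0.75)
q75_b = np.quantile(line_b, 0.75)
q75_sum = np.quantile(line_a + line_b, 0.75)  # percentile of the aggregate
```

With these assumptions, `q75_sum` comes out strictly below `q75_a + q75_b`, illustrating the diversification benefit; only as the correlation approaches 1 do the two quantities converge.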

Impact on the 75th percentile of the aggregate distribution

Theory of Copulas
- Copulas provide a convenient way to express the aggregate distribution of several random variables
- Copula components: the distributions of the individual random variables, and the correlations among these variables
- Correlation coefficients measure the overall strength of association across the various distributions
- Copulas can vary that degree of association over the various parts of the aggregate distribution
- Example: for workers comp and property losses, the correlation is higher in the tail of the distribution
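A minimal Gaussian-copula sketch: correlated normals are pushed through the standard normal CDF to obtain dependent uniforms, then arbitrary marginals are applied by inverse CDF. The exponential marginals and their scales are assumptions chosen for illustration:

```python
import numpy as np
from math import erf, sqrt

rng = np.random.default_rng(1)

def gaussian_copula_sample(n, rho):
    """Draw n dependent uniform pairs on [0,1]^2: correlated normals pushed
    through the standard normal CDF (a Gaussian copula with correlation rho)."""
    cov = [[1.0, rho], [rho, 1.0]]
    z = rng.multivariate_normal([0.0, 0.0], cov, size=n)
    norm_cdf = np.vectorize(lambda x: 0.5 * (1.0 + erf(x / sqrt(2.0))))
    return norm_cdf(z[:, 0]), norm_cdf(z[:, 1])

u, v = gaussian_copula_sample(20_000, rho=0.6)

# Apply the marginals by inverse CDF: exponential losses for two hypothetical
# lines (the scales 100 and 250 are assumptions for illustration).
wc_losses = -100.0 * np.log1p(-u)
prop_losses = -250.0 * np.log1p(-v)
```

Note that a Gaussian copula keeps the association roughly uniform across the distribution; for the workers comp/property example in the last bullet, a copula with tail dependence (e.g., a t or Gumbel copula) would be the natural choice.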

Comparison of Copulas

Range of Reasonable Estimates and Standards of Materiality

"Standards" for comparing actuarial estimates
- "Statistical" materiality, in an actuarial/statistical context: Are two actuarial estimates significantly different from each other? Parameter risk is relevant here; we are concerned with the variability of the "expected" claim liabilities only
- "Financial" materiality, in a financial reporting context: Would users of financial statements draw different conclusions if a different reserve estimate were booked in the financial statement? Total risk is relevant here; we are concerned with the variability of the "actual" claim liabilities

We follow a "null hypothesis" testing approach
[Chart: carried reserves at the center, bounded by a lower and an upper materiality standard; the Type I error (significance level) lies in each tail beyond the standards.]

"Statistical" materiality standards at a 7.5% significance level
- The coefficient of variation (CV) of claim liabilities is proportional to the "inherent" risk of a line of business
- The range implied by the statistical standards is proportional to the volatility of the line of business
- "Financial" materiality standards should consider the risk of the claim liabilities in comparison to surplus and net income

A little diversion: Financial vs. Statistical standards
- Even if the carried and indicated reserves are "statistically" indistinguishable from one another, the risk of material adverse deviation still remains!
- This happens when: upper "statistical" range − carried reserves ≥ financial materiality standard
- The upper statistical range is produced by "appropriate" actuarial methods and sets of assumptions…
[Chart: carried reserves lie between the lower and upper statistical range; the level "financial materiality standard + carried reserves" falls below the upper statistical range.]
