MULTIPLE TRIANGLE MODELLING (or MPTF)

APPLICATIONS
- MULTIPLE LINES OF BUSINESS: DIVERSIFICATION?
- MULTIPLE SEGMENTS
  - MEDICAL VERSUS INDEMNITY
  - SAME LINE, DIFFERENT STATES
  - GROSS VERSUS NET OF REINSURANCE
  - LAYERS
- CREDIBILITY MODELLING
  - ONLY A FEW YEARS OF DATA AVAILABLE
  - HIGH PROCESS VARIABILITY MAKES IT DIFFICULT TO ESTIMATE TRENDS

BENEFITS
- Level of diversification: optimal capital allocation by LOB
- Mergers/acquisitions
- Writing new business: how is it related to what is already on the books?

More information will be available at the St. James Room, 4th floor, 7–11 pm. ICRFS-ELRF will be available for FREE!

Model Displays for LOB1 and LOB3

LOB1 and LOB3 Weighted Residual Plots for Calendar Year

The pictures shown above correspond to two linear models, described by the equations below. Without loss of generality, the two models in (1) can be combined into the single linear model (2).
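The equations themselves are not reproduced in this transcript; a standard reconstruction of the two models (1) and their combined form (2), consistent with the discussion that follows, is:

```latex
% Two separate linear models -- equations (1):
y_1 = X_1 \beta_1 + \varepsilon_1, \qquad
y_2 = X_2 \beta_2 + \varepsilon_2 \tag{1}

% Stacked into a single linear model -- equation (2):
\begin{pmatrix} y_1 \\ y_2 \end{pmatrix}
=
\begin{pmatrix} X_1 & 0 \\ 0 & X_2 \end{pmatrix}
\begin{pmatrix} \beta_1 \\ \beta_2 \end{pmatrix}
+
\begin{pmatrix} \varepsilon_1 \\ \varepsilon_2 \end{pmatrix} \tag{2}
```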

To illustrate the simplest case, suppose the response vectors y in the models (1) have the same length n, and adopt covariance assumptions under which the covariance matrix of the combined error vector takes a block form that can be rewritten as a Kronecker product.
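The covariance assumptions are also missing from the transcript; a standard reconstruction (equal error variances within each model, cell-wise covariance between them) is:

```latex
\operatorname{Var}(\varepsilon_i) = \sigma_i^2 I_n \ (i = 1, 2), \qquad
\operatorname{Cov}(\varepsilon_1, \varepsilon_2) = \sigma_{12} I_n,
```

so that the covariance matrix of the combined error vector is

```latex
\Sigma =
\begin{pmatrix}
\sigma_1^2 I_n & \sigma_{12} I_n \\
\sigma_{12} I_n & \sigma_2^2 I_n
\end{pmatrix}
= \Sigma_0 \otimes I_n,
\qquad
\Sigma_0 =
\begin{pmatrix}
\sigma_1^2 & \sigma_{12} \\
\sigma_{12} & \sigma_2^2
\end{pmatrix}.
```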

For example, when n = 3:
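Written out under the same assumptions, the 6×6 covariance matrix would be:

```latex
\Sigma =
\begin{pmatrix}
\sigma_1^2 & 0 & 0 & \sigma_{12} & 0 & 0 \\
0 & \sigma_1^2 & 0 & 0 & \sigma_{12} & 0 \\
0 & 0 & \sigma_1^2 & 0 & 0 & \sigma_{12} \\
\sigma_{12} & 0 & 0 & \sigma_2^2 & 0 & 0 \\
0 & \sigma_{12} & 0 & 0 & \sigma_2^2 & 0 \\
0 & 0 & \sigma_{12} & 0 & 0 & \sigma_2^2
\end{pmatrix}.
```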

There is a big difference between the separate linear models in (1) and the combined linear model (2): fitting the models in (1) separately cannot exploit the additional information contained in the dependence between them, whereas model (2) can. To extract this information we need an appropriate method for estimating the parameter vector β. The ordinary least squares (OLS) estimator provides no advantage here, because the covariance matrix Σ does not enter the estimation; only generalized least squares (GLS) can achieve better results.

However, it must be stressed immediately that the elements of the matrix Σ are unknown and have to be estimated as well. In practice, therefore, we build an iterative estimation process that stops when the estimates reach satisfactory statistical properties.
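A minimal sketch of such an iterative (feasible) GLS procedure for the two stacked models, in Python with NumPy; the function name and the convergence rule are illustrative choices, not part of the original presentation:

```python
import numpy as np

def feasible_gls(X1, y1, X2, y2, n_iter=20, tol=1e-8):
    """Iterative (feasible) GLS for two correlated linear models.

    Start from OLS, estimate the error covariance from the residuals,
    and repeat the GLS step until the parameter estimates stabilise.
    """
    n = len(y1)
    p1, p2 = X1.shape[1], X2.shape[1]
    # Stack the two models of (1) into the combined model (2)
    X = np.block([[X1, np.zeros((n, p2))],
                  [np.zeros((n, p1)), X2]])
    y = np.concatenate([y1, y2])
    # OLS start -- equivalent to fitting the two models separately
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(n_iter):
        e = y - X @ beta
        # 2x2 covariance of the paired residuals (e1_i, e2_i)
        S = np.cov(np.vstack([e[:n], e[n:]]), bias=True)
        # Sigma = S kron I_n, so Sigma^{-1} = S^{-1} kron I_n
        W = np.kron(np.linalg.inv(S), np.eye(n))
        beta_new = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        if np.max(np.abs(beta_new - beta)) < tol:
            return beta_new, S
        beta = beta_new
    return beta, S
```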

There are cases in which model (2) gives the same results as the separate models in (1):
1. The design matrices in (1) have the same structure (they are identical or proportional to each other).
2. The models in (1) are uncorrelated; in other words, σ₁₂ = 0.
However, when the two models in (1) share common regressors, model (2) again has advantages even though the design matrices have the same structure.

Correlation and Linearity
The idea of correlation arises naturally for two random variables that have a bivariate normal joint distribution. For each individual variable, two parameters, a mean and a standard deviation, are sufficient to fully describe its probability distribution. For the joint distribution, a single additional parameter is required: the correlation. If X and Y have a bivariate normal distribution, the relationship between them is linear: the mean of Y, given X, is a linear function of X, i.e. E(Y | X) = α + βX.

The slope β is determined by the correlation ρ and the standard deviations σ_X and σ_Y: β = ρσ_Y/σ_X. The correlation between Y and X is zero if and only if the slope β is zero. Also note that when Y and X have a bivariate normal distribution, the conditional variance of Y given X is constant, i.e. not a function of X: Var(Y | X) = σ_Y²(1 − ρ²).

This is why, in the usual linear regression model Y = α + βX + ε, the variance of the "error" term ε does not depend on X. However, not all variables are linearly related. Suppose we have two random variables S and T related by a nonlinear equation, where T is normally distributed with mean zero and variance 1. What is the correlation between S and T?
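The slide's equation for S is not shown here; assuming, for illustration, the standard example S = e^T (an assumption consistent with the lognormal discussion below), the correlation can be computed directly:

```latex
% Assuming S = e^T with T ~ N(0, 1):
\operatorname{Cov}(S, T) = E[T e^T] - E[T]\,E[e^T] = e^{1/2}, \qquad
\operatorname{Var}(S) = e^2 - e, \qquad \operatorname{Var}(T) = 1,
```

so corr(S, T) = e^{1/2} / √(e² − e) = 1/√(e − 1) ≈ 0.76: well below 1, even though S is a deterministic, monotone function of T.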

Linear correlation is a measure of how close two random variables are to being linearly related. In fact, if we know that the linear correlation is +1 or −1, then there must be a deterministic linear relationship Y = α + βX between Y and X (and vice versa). If Y and X are linearly related, and f and g are functions, the relationship between f(Y) and g(X) is not necessarily linear, so we should not expect the linear correlation between f(Y) and g(X) to be the same as that between Y and X.

A common misconception with correlated lognormals
Actuaries frequently need to find covariances or correlations between variables, for example when finding the variance of a sum of forecasts (in P&C reserving, when combining territories or lines of business, or when computing the benefit of diversification). Correlated normal random variables are well understood. The usual multivariate distribution used for the analysis of related normals is the multivariate normal, in which correlated variables are linearly related. In this circumstance the usual linear correlation (the Pearson correlation) makes sense.

However, when dealing with lognormal random variables (whose logs are normally distributed), if the underlying normal variables are linearly correlated, then the correlation of lognormals changes as the variance parameters change, even though the correlation of the underlying normal does not.

All three lognormals below are based on normal variables with correlation 0.78, as shown left, but with different standard deviations.
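A short sketch of why this happens, using the closed-form Pearson correlation of two lognormals whose logs have correlation ρ and standard deviations σ₁, σ₂ (the function name and the example σ values are illustrative):

```python
import numpy as np

def lognormal_corr(rho, s1, s2):
    """Pearson correlation of (exp(X), exp(Y)) when (X, Y) is bivariate
    normal with correlation rho and standard deviations s1, s2."""
    num = np.exp(rho * s1 * s2) - 1.0
    den = np.sqrt((np.exp(s1 ** 2) - 1.0) * (np.exp(s2 ** 2) - 1.0))
    return num / den

# Log-scale correlation fixed at 0.78 (as in the plots); vary the sigmas:
for s in (0.1, 0.5, 1.0, 2.0):
    print(f"sigma = {s:4.1f}: dollar-scale correlation = "
          f"{lognormal_corr(0.78, s, s):.3f}")
```

With the log-scale correlation held at 0.78, the dollar-scale correlation stays near 0.78 for small σ but falls well below it as σ grows.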

We cannot measure the correlation on the log scale and apply it directly on the dollar scale, because the correlation is not the same on that scale. Moreover, if the relationship is linear on the log scale (the normal variables are multivariate normal), it is no longer linear on the original scale, so the dependence is no longer described by a linear correlation. In general, the relationship between the variables becomes a curve:
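To make the curve explicit: under the bivariate normal assumption on the log scale, standard lognormal algebra gives a dollar-scale conditional mean that is a power curve rather than a straight line:

```latex
E\left[ e^Y \mid e^X = x \right]
= \exp\!\left( \mu_Y - \rho \tfrac{\sigma_Y}{\sigma_X} \mu_X
       + \tfrac{1}{2} \sigma_Y^2 (1 - \rho^2) \right)
  \cdot x^{\rho \sigma_Y / \sigma_X},
```

which is linear in x only in the special case ρσ_Y/σ_X = 1.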

Weighted Residual Plots for LOB1 and LOB3 versus Calendar Years
What does correlation mean?

Model Displays for LOB1 and LOB3 for Calendar Years

Model for individual iota parameters

There are two types of correlations involved in the calculation of reserve distributions:
- Weighted residual correlations between datasets: the weighted residual correlation between datasets LOB1 and LOB3;
- Correlations in parameter estimates: the correlation between the iota parameters in LOB1 and LOB3.
These two types of correlations induce correlations both between and within triangle cells.

Common iota parameter in both triangles

Two effects:
- The same parameter for each LOB increases the correlations and the CV of the aggregates.
- A single parameter with lower CV reduces the CV of the aggregates.

Forecast reserves by accident year, calendar year and in total are correlated:
- indicates dependency through residual and parameter correlations;
- indicates dependency through parameter-estimate correlations only.

Dependency of aggregates in the aggregate table: Var(Aggregate) >> Var(LOB1) + Var(LOB2), in each forecast cell and in the aggregates by accident year and calendar year (and in total). Since Var(LOB1 + LOB2) = Var(LOB1) + Var(LOB2) + 2·Cov(LOB1, LOB2), positive correlation between the reserve distributions drives the aggregate variance above the sum of the individual variances.

Simulations from lognormals correlated within LOB and between LOBs
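A minimal sketch of such a simulation in Python; the means, standard deviations, and correlation values below are illustrative placeholders, not the presentation's fitted values:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical log-scale parameters for two forecast cells in each of two
# LOBs (placeholders for illustration only)
mu = np.array([10.0, 9.5, 8.0, 7.5])      # log-scale means
sigma = np.array([0.3, 0.4, 0.5, 0.6])    # log-scale standard deviations

# Correlation: stronger within a LOB (cells 1-2 and 3-4) than between LOBs
R = np.array([[1.0, 0.6, 0.3, 0.3],
              [0.6, 1.0, 0.3, 0.3],
              [0.3, 0.3, 1.0, 0.6],
              [0.3, 0.3, 0.6, 1.0]])
cov = np.outer(sigma, sigma) * R          # log-scale covariance matrix

# Simulate on the normal (log) scale, then exponentiate to the dollar scale
logs = rng.multivariate_normal(mu, cov, size=100_000)
cells = np.exp(logs)

aggregate = cells.sum(axis=1)             # aggregate reserve across all cells
print("mean:", aggregate.mean(), "CV:", aggregate.std() / aggregate.mean())
```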

GROSS VERSUS NET: Model Display 1
Is your outward reinsurance program optimal?
6% stable trend in calendar year.

Model Display 2
Zero trend in calendar year. Therefore the cedant is ceding the part of the business that is growing!

Weighted Residual Covariances Between Datasets
Weighted Residual Correlations Between Datasets

Weighted Residuals versus Accident Year: Gross

Weighted Residuals versus Accident Year: Net

Weighted Residual Normality Plot

Accident Year Summary
CV Net > CV Gross. Why? Isn't that bad for the cedant?

MODELING LAYERS
- Similar structure
- Highly correlated
- Surprise finding! The CV of reserves limited to $1M is the same as the CV of reserves limited to $2M!

Model Display for All 1M: PL(I)

Model Display for All 2M: PL(I)

Model Display for All 1Mxs1M: PL(I)
Note that All 1Mxs1M has zero inflation, All 2M has lower inflation than All 1M, and the distributions of the parameters going forward are correlated.

Residual displays versus calendar years show high correlations between the three triangles.

Weighted Residual Covariances Between Datasets
Weighted Residual Correlations Between Datasets

Compare Accident Year Summaries: consistent forecasts based on the composite model.

If we compare the forecasts by accident year for Limited 1M and Limited 2M, it is easy to see that the CVs are the same.

More information will be available at the St. James Room, 4th floor, 7–11 pm. ICRFS-ELRF will be available for FREE!