INEQUALITY MEASUREMENT RECONSIDERED: LEAKY BUCKETS VERSUS COMPENSATING JUSTICE
AN EXPERIMENTAL INVESTIGATION OF TRANSFERS WITH TRANSACTION COSTS
By Eva Camacho-Cuena, Universidad Autònoma Madrid; Tibor Neugebauer, University of Luxembourg; Christian Seidl, Universität Kiel

Okun on Leaky Buckets

Okun started out with an approval of the pure transfer principle, but remarked that in reality "the money must be carried from the rich to the poor in a leaky bucket. Some of it will simply disappear in transit, so the poor will not receive all the money that is taken from the rich" [Okun (1975, p. 91)].

Okun (1975, p. 91) put the following question to an outside ethical observer: "I shall not try to measure the leak now, because I want you to decide how much leakage you would accept and still support the Tax-and-Transfer Equalization Act."

Okun (1975, p. 94) states his own view: "Since I feel obliged to play the far-fetched games that I make up, I will report that I would stop at the leakage of 60 percent in this particular example."

Okun (1975, p. 94) adds that the tolerated leakage is an increasing function of the income level of the transferors: "If the proposed tax were to be imposed only on the handful of wealthiest American families with annual incomes above $1 million, you might well support the equalization up to a much bigger leakage. In fact, some people would wish to take money away from the super-rich even if not one cent reached the poor."

The Transfer Principle with Transaction Costs

Progressive transfers of income cause income distributions to become more equal. Hence, income inequality measures decrease and aggregate welfare increases in response to progressive transfers. Suppose transfers incur transaction costs: what is the maximum leakage of transaction costs such that a transfer still "pays at the margin" [in terms of leaving the degree of income inequality or aggregate social welfare intact]? This is a generalization of the transfer principle to transaction costs.
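The break-even leakage can be computed directly. The following sketch (not from the slides) uses the experiment's income distribution and, as an assumption, the Gini coefficient for the unspecified inequality measure I(.): 100€ are taken from the richest income, the poorest receives (1 − leakage) × 100€, and we bisect for the leakage at which the Gini coefficient is exactly unchanged.

```python
# Sketch (assumption: Gini coefficient as the inequality measure).
# Take 100 EUR from the richest income, give (1 - leakage) * 100 EUR to
# the poorest, and search for the leakage that leaves inequality intact.

def gini(incomes):
    """Gini coefficient: mean absolute difference / (2 * mean)."""
    n = len(incomes)
    mu = sum(incomes) / n
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return mad / (2 * mu)

incomes = [500, 750, 1000, 1250, 1500, 1750, 2000]
g0 = gini(incomes)

def gini_after(received):
    """Gini after the richest loses 100 EUR and the poorest gets `received`."""
    y = incomes[:]
    y[-1] -= 100
    y[0] += received
    return gini(y)

# gini_after is decreasing in `received`; bisect for gini_after == g0.
# Note that the break-even receipt may be negative.
lo, hi = -100.0, 100.0
for _ in range(60):
    mid = (lo + hi) / 2
    if gini_after(mid) > g0:
        lo = mid
    else:
        hi = mid

received = (lo + hi) / 2
leakage = 1 - received / 100
print(f"break-even receipt: {received:.1f} EUR, leakage: {leakage:.0%}")
```

For this distribution and the Gini coefficient, the poorest would in fact have to lose about 58€ for inequality to stay exactly constant: the break-even leakage is roughly 158%, i.e. above 100%. This is the leaky-bucket paradox discussed on the next slide.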

The Leaky-Bucket Paradox

Leaky-bucket inconsistency: it is generally wrong to expect that some fraction of the transfer has to arrive at the transferee whenever the degree of income inequality is to be maintained [Seidl (2001), Hoffmann (2001), Lambert and Lanza (2006)]. Rather, there exists a unique benchmark income such that this conjecture holds only for transfers below the benchmark. For transfers above the benchmark, the transferee has to receive a higher amount than the transfer. For transfers across the benchmark, the "transferee" even has to lose some money to maintain the same degree of income inequality.
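The three regimes can be reproduced numerically. In this sketch (assumption: Gini coefficient as the inequality measure), the break-even receipt per marginal euro transferred from income y_j to income y_k solves −∂I/∂y_j + f·∂I/∂y_k = 0, i.e. f = (∂I/∂y_j)/(∂I/∂y_k); for the slides' distribution the Gini benchmark lies between 1250€ and 1500€, and f falls into the three regimes described above.

```python
# Sketch (assumption: Gini as I(.)). The marginal break-even receipt per
# euro transferred from y_j to y_k is f = (dI/dy_j) / (dI/dy_k):
#   both incomes below the benchmark  -> 0 < f < 1 (a fraction arrives),
#   both above the benchmark          -> f > 1 (receive more than taken),
#   across the benchmark              -> f < 0 (the transferee must lose).

def gini(incomes):
    n = len(incomes)
    mu = sum(incomes) / n
    return sum(abs(a - b) for a in incomes for b in incomes) / (2 * n * n * mu)

incomes = [500, 750, 1000, 1250, 1500, 1750, 2000]

def dgini(k, h=0.01):
    """Central-difference partial derivative of the Gini w.r.t. income k."""
    up, dn = incomes[:], incomes[:]
    up[k] += h
    dn[k] -= h
    return (gini(up) - gini(dn)) / (2 * h)

def breakeven_receipt(j, k):
    """Fraction of a marginal transfer from j that k must receive
    for the Gini coefficient to stay exactly constant."""
    return dgini(j) / dgini(k)

f_below = breakeven_receipt(2, 0)   # 1000 -> 500: both below the benchmark
f_above = breakeven_receipt(6, 4)   # 2000 -> 1500: both above the benchmark
f_across = breakeven_receipt(6, 0)  # 2000 -> 500: across the benchmark
print(f_below, f_above, f_across)
```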

The Universe of Pairwise Income Transactions

Transfers may be regressive rather than progressive. Income earners may experience income increases rather than decreases. This yields a fourfold pattern of cases. Traditional research has focused on only one of these cases: progressive transfers.

Theory

The income inequality measure I(.) is an indicator of income inequality. For completely equal incomes it assumes the value zero. Increases in inequality are indicated by higher values of the measure. Thus, I(.) is an increasing function of income inequality.
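A minimal sketch of one admissible choice of I(.) (our illustration, not the slides' definition): the generalized entropy measure with parameter α = 2, i.e. half the squared coefficient of variation. It is zero for equal incomes and rises under a mean-preserving regressive transfer.

```python
# Generalized entropy measure with alpha = 2 (half the squared coefficient
# of variation): zero for equal incomes, higher values = more inequality.

def ge2(incomes):
    n = len(incomes)
    mu = sum(incomes) / n
    return (sum((y / mu) ** 2 for y in incomes) / n - 1) / 2

equal = [1250] * 7
base = [500, 750, 1000, 1250, 1500, 1750, 2000]
worse = [400, 750, 1000, 1250, 1500, 1750, 2100]  # 100 EUR from poorest to richest

print(ge2(equal), ge2(base), ge2(worse))
```

The regressive transfer in `worse` is mean-preserving, so the increase in the measure reflects inequality alone, not a change in total income.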

The existence and the properties of benchmarks were first noticed by Seidl (2001) and Hoffmann (2001). A more comprehensive follow-up study is due to Lambert and Lanza (2006). Hoffmann (2001) uses the benchmark as a kind of poverty line separating the "relatively poor" from the "relatively rich". However, as the benchmark depends both on the income distribution and on the inequality measure applied, this would make the poverty line dependent on the inequality measure applied.

We now consider the basic result of leaky-bucket theory. It holds for differentiable, inequality-averse, and scale- or translation-invariant inequality measures. It will be experimentally tested. [Slide formulas not reproduced in the transcript: the case of progressive transfers, the same conditions, and the change of the inequality sign.]

Nota bene: compensating justice [as observed in our experiments] is at variance both with the transfer principle and with leaky-bucket theory!

Experimental Design

Our experimental design used the income distribution (500€, 750€, 1000€, 1250€, 1500€, 1750€, 2000€), where the incomes refer to the monthly net incomes of seven equally numerous groups of income recipients. The income distribution was presented in the upper half of a computer screen. Upon touching a key, 100€ were either added to or subtracted from one income at random. This was shown in the lower half of the computer screen (next slide). This gave us 7 × 6 = 42 combinations each for +100€ and −100€, that is, 84 combinations. Each subject responded to all 84 combinations, presented in random order. There were no material incentives. The subjects were asked to adjust the second income in the lower half of the screen such that the degree of income inequality within this society remained unchanged.
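The stimulus set can be sketched as follows: every ordered pair of distinct incomes (the income changed at random, and the income the subject must adjust), crossed with a +100€ or −100€ change, giving 7 × 6 × 2 = 84 stimuli in random order. The tuple layout is our illustration, not the authors' software.

```python
# Sketch of the 84 stimuli: ordered pairs of distinct incomes
# (changed income, income to adjust), crossed with the sign of the
# 100 EUR change, then shuffled into a random presentation order.
import itertools
import random

incomes = [500, 750, 1000, 1250, 1500, 1750, 2000]
stimuli = [(changed, delta, to_adjust)
           for changed, to_adjust in itertools.permutations(incomes, 2)
           for delta in (+100, -100)]

random.shuffle(stimuli)  # each subject saw a random order
print(len(stimuli))      # 84
```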

Figure 1: The Experimental Design: Screenshot

The Numerical Benchmarks

By Theorem 10, the numerical benchmarks are implicitly defined by setting the partial derivatives of an income inequality measure equal to zero. The partial derivatives are functions of the income distribution and of the parameters of the inequality measure. As the income distribution is given, the partial derivatives are functions of the inequality parameter alone. We use three income inequality measures: entropy, extended Gini, and generalized Atkinson. Thus the benchmarks y* can be plotted over the domain of the inequality parameters. This is shown in Figure 2.
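Theorem 10 itself is not reproduced in the transcript. For the generalized entropy family, however, the first-order condition ∂I/∂y = 0 has a well-known closed form, which the following sketch (under that assumption) evaluates for the experiment's distribution:

```python
# Sketch: benchmark y* for the generalized entropy measure, from setting
# the partial derivative of I w.r.t. an income at level y to zero.
# Closed form assumed here (not quoted from the slides):
#   y* = mu * [ (1/n) * sum((y_i/mu)**alpha) ] ** (1/(alpha - 1))

incomes = [500, 750, 1000, 1250, 1500, 1750, 2000]
n = len(incomes)
mu = sum(incomes) / n

def benchmark(alpha):
    moment = sum((y / mu) ** alpha for y in incomes) / n
    return mu * moment ** (1 / (alpha - 1))

for alpha in (0.5, 2.0, 3.0):
    print(f"alpha = {alpha}: y* = {benchmark(alpha):.1f}")
```

For α = 2 the benchmark is 1450€ (mean plus variance over mean), which lies within the 750 < y* < 1500 interval derived from Table 1; for extreme parameter values the benchmark drifts outside that range.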

Benchmark Functions

Note that the Atkinson inequality measure and the entropy inequality measure are just mirror images: an appropriate definition of the respective parameters can show that.
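The mirror-image claim can be checked numerically under one common parameterization (an assumption on our part; the slides' own parameterization is not shown): the Atkinson measure with aversion parameter ε is a monotone transformation of the generalized entropy measure with α = 1 − ε.

```python
# Sketch: Atkinson (parameter eps) vs. generalized entropy (alpha = 1 - eps).
# Under this standard parameterization,
#   A = 1 - (alpha*(alpha-1)*GE + 1) ** (1/alpha),
# so the two measures are monotone transforms of each other.

incomes = [500, 750, 1000, 1250, 1500, 1750, 2000]
n = len(incomes)
mu = sum(incomes) / n

def atkinson(eps):
    assert eps > 0 and eps != 1
    ede = (sum((y / mu) ** (1 - eps) for y in incomes) / n) ** (1 / (1 - eps))
    return 1 - ede  # 1 - equally-distributed-equivalent income / mean

def gen_entropy(alpha):
    assert alpha not in (0, 1)
    return (sum((y / mu) ** alpha for y in incomes) / n - 1) / (alpha * (alpha - 1))

eps = 0.5
alpha = 1 - eps
a_direct = atkinson(eps)
a_from_ge = 1 - (alpha * (alpha - 1) * gen_entropy(alpha) + 1) ** (1 / alpha)
print(a_direct, a_from_ge)
```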

Table 1 shows the parameter values for which the respective entries of the income distribution become the benchmark. The table demonstrates that the parameter values for some entries at the bottom and at the top of the income distribution are completely implausible. Thus, Table 1 allows us to restrict the interval in which y* may be located to 750 < y* < 1500.

Results

Compare what leaky-bucket theory asks for:
(a) Negative responses Δ for δ > 0 and positive responses Δ for δ < 0. Table 2 shows the opposite.
(b) Positive responses Δ for δ > 0 and negative responses Δ for δ < 0 (y_j > y_k).
(c) For benchmarks symmetrically distributed around 1250€, Δ should be around zero. But Table 2 exhibits positive responses for δ > 0 and negative responses for δ < 0.
Rather we observe compensating justice: gains should entail gains, and losses should entail losses. Moreover, the focus of compensating justice rests on the richer party for income gains (the poorer party gets more) and on the poorer party for income losses (the richer party loses more).

Table 3 focuses on the numbers and percentages of responses. For income gains, positive income compensation dominates. For income losses, negative income compensation dominates. The focus of compensating justice rests on the richer party for income gains and on the poorer party for income losses. If the poorer party experiences an income gain, richer persons receive income gains too, but less frequently than before (60.05% versus 76.98%). If the richer party experiences an income loss, poorer persons experience income losses too, but less frequently than before (52.50% versus 73.49%). Note, however, that the basic tendency remains intact.

The second lines of the cells give the percentages for the same side of, opposite sides of, and unknown benchmarks. We do not observe diverse responses in these second lines. Rather we observe a simple common pattern: income gains of one income recipient should be matched by income gains of the other involved income recipient, and income losses by income losses, to maintain inequality neutrality. This is simple compensating justice. For δ > 0 the poorer party should get more; for δ < 0 the richer party should lose more. This is graded compensating justice.

Compensating Justice

In order to capture these findings in quantitative terms, we estimated several equations. We finally settled on a logarithmic equation because all parameters except one are significant at the 10% significance level. Other equations used dummies for the relative position of the income recipient whose income was originally affected; but these equations, while more complicated and more difficult to interpret, did not provide a much better fit. Hence we estimated: [the regression equation is not reproduced in the transcript].

Table 4: Compensating Justice

The richer party perceives a gain: y_j − y_k > 0. The poorer party perceives a loss: y_j − y_k < 0.

Examples:
When y_j = 2000 receives 1€, then y_k = 500 should receive 0.886€.
When y_j = 500 loses 1€, then y_k = 2000 should lose 1.122€.
When y_j = 2000 receives 1€, then y_k = 1750 should receive …€.
When y_j = 1750 loses 1€, then y_k = 2000 should lose 0.379€.
When y_j = 750 receives 1€, then y_k = 500 should receive …€.
When y_j = 500 loses 1€, then y_k = 750 should lose …€.

Conclusion