
1 Prioritarianism and Climate Change Matthew Adler, Duke University LSE, MSU Workshop June, 2015

2 Overview of Talk
– The social welfare function (SWF) framework
– The prioritarian SWF (presented, and contrasted with the utilitarian SWF)
– Prioritarianism under risk: "ex ante" versus "ex post" prioritarianism
– Possible implications for climate change: research questions?
This talk is based upon joint work with Nicolas Treich (Adler/Treich 2015), although he should not be held responsible for everything I say here!

3 The SWF Framework
An interpersonally comparable well-being function w(.) and a rule E whereby outcomes are ranked: outcome x is at least as good as y (x ≽_E y) iff (w_1(x), w_2(x), …, w_N(x)) E (w_1(y), w_2(y), …, w_N(y)). Actions (policy choices) are in turn ranked in light of the outcome ranking. SWFs originate in theoretical welfare economics; they are now used in optimal tax theory, public finance, environmental economics (including climate change), etc. Adler, Well-Being and Fair Distribution (2012), and Adler and Fleurbaey, Oxford Handbook of Well-Being and Public Policy (forthcoming), provide overviews.

4 The SWF as a tool for ethical decisionmaking
I view the SWF as a tool for ethical (moral, "social") decisionmaking. Specifying an SWF means making ethical choices. Descriptive facts don't settle normative questions (Hume et al.: no "ought" from "is"). Moreover, since individuals in any modern, pluralistic society have diverse ethical views, it's problematic (I think) to see society as having a single SWF, or to try to "infer" what that SWF is. The electoral process determines whose ethical views get to influence policy (for now, until the next election).
Bergson: "A notable feature in welfare economics is the attempt to formulate a criterion of social welfare without recourse to controversial ethical premises… [T]his goal for the criterion is an illusion." Harsanyi: "This function W_i that individual i will use in evaluating social situations from a moral point of view will be called his social welfare function."
That said, ethical thinking is not wholly unstructured. The axiomatic method of welfare economics/social choice theory is hugely valuable in sorting through the "space" of SWFs and well-being measures; and contemporary moral philosophy, in bringing to light sophisticated arguments for different possible approaches.

5 Normative Choices in Elaborating the SWF Framework
– What is the well-being measure w(.)? Preferences, happiness, objective goods.
– What is the ethical rule E for ranking well-being vectors? Utilitarian, prioritarian, other.
– Application under risk (well-defined probabilities): utilitarianism, ex ante prioritarianism, ex post prioritarianism, "transformed" versions.
– Deep uncertainty.
– Variable populations, e.g., Parfit's "repugnant conclusion."
– Ethics versus self-interest (Sidgwick). To what extent is it reasonable for the current generation to give priority to its own interests, even though an ethical perspective requires impartiality vis-à-vis the future?

6 The Well-Being Measure
Let c denote an individual's consumption of marketed goods; a her non-market attributes (e.g., health, environmental quality); and R her preferences over (c, a) bundles. Different forms for w(.):
– Objective good/"capability": w(c, a, R) = o(c, a)
– Happiness: w(c, a, R) = h(c, a, R)
– Preference-based. Simple case: individuals have the same preferences (very often assumed by economists who use SWFs): w(c, a, R) = u_R(c, a), with u_R(.) a vNM utility function representing the common preferences R (or, in a more general formulation, F(u_R(c, a)), with F(.) increasing). Heterogeneous preferences (Harsanyi's concept of "extended preferences"): w(c, a, R) = s(R)u_R(c, a) + t(R), with s(.) and t(.) scaling factors for the various vNM functions (or F(s(R)u_R(c, a) + t(R))).
Complex mixture of normative and descriptive: although the choice between these views is normative, the application of a specific view will in turn depend on empirical facts. On the preference view, well-being depends upon individuals' preferences, and it is in turn an empirical question what those preferences are.

7 The Rule E
These formulas assume a fixed population: N(x) = N(y) = N for all outcomes.
Utilitarian: x at least as good as y iff ∑w_i(x) ≥ ∑w_i(y).
Prioritarian: x at least as good as y iff ∑g(w_i(x)) ≥ ∑g(w_i(y)), with g(.) strictly increasing and concave. Prioritarians give greater weight to well-being changes affecting worse-off individuals. There is a vibrant philosophical literature starting with Parfit (1991). The key axiomatic difference from utilitarianism is the Pigou-Dalton principle, which says that (1, 3, 8, 10) is ethically preferred to (1, 2, 9, 10).
Atkinson SWF: (1−γ)^(−1) ∑ w_i^(1−γ), with γ > 0 an inequality-aversion parameter (the SWF becomes utilitarian as γ → 0, and leximin as γ → ∞).
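As a quick concrete check (a minimal sketch of my own, not taken from the slides), here are the utilitarian and Atkinson rules applied to the Pigou-Dalton example above; γ = 2 is an arbitrary illustrative value.

```python
import numpy as np

def utilitarian(w):
    """Utilitarian social value: the sum of well-being levels."""
    return float(np.sum(w))

def atkinson(w, gamma):
    """Atkinson prioritarian social value, gamma > 0 (log limit at gamma = 1)."""
    w = np.asarray(w, dtype=float)
    if gamma == 1.0:
        return float(np.sum(np.log(w)))
    return float(np.sum(w ** (1 - gamma)) / (1 - gamma))

x = [1, 3, 8, 10]   # post-transfer vector from the Pigou-Dalton example
y = [1, 2, 9, 10]   # pre-transfer vector

print(utilitarian(x) == utilitarian(y))                  # True: utilitarianism is indifferent
print(atkinson(x, gamma=2.0) > atkinson(y, gamma=2.0))   # True: the prioritarian rule prefers x
```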

8 The Prioritarian SWF

9 Axioms for E
Pareto superiority: (3, 4, 10, 13) ≻ (3, 4, 10, 12)
Anonymity/Impartiality: (7, 12, 4, 60) ∼ (12, 60, 4, 7)
Pigou-Dalton: (1, 3, 8, 10) ≻ (1, 2, 9, 10)
Separability: (7, 100, 100, 7) ≽ (4, 100, 100, 12) iff (7, 7, 7, 7) ≽ (4, 7, 7, 12)
Continuity: if (1, 3, 50000, 50000) ≻ (1, 3, 6, 8), then (1, 3±ε, 50000, 50000) ≻ (1, 3, 6, 8) for ε sufficiently small
Ratio rescaling invariance: (10, 12, 17, 20) ≽ (10, 10, 20, 20) iff (50, 60, 85, 100) ≽ (50, 50, 100, 100)
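The separability and ratio-rescaling examples can be verified numerically; the sketch below (my own, with an arbitrary γ = 2) checks that the Atkinson rule ranks each pair of comparisons consistently.

```python
import numpy as np

def atkinson(w, gamma=2.0):
    """Atkinson prioritarian social value with inequality aversion gamma > 0."""
    w = np.asarray(w, dtype=float)
    return np.sum(w ** (1 - gamma)) / (1 - gamma)

def prefers(a, b, gamma=2.0):
    return atkinson(a, gamma) >= atkinson(b, gamma)

# Separability: the ranking of (7,100,100,7) vs (4,100,100,12) matches (7,7,7,7) vs (4,7,7,12)
print(prefers([7, 100, 100, 7], [4, 100, 100, 12]) ==
      prefers([7, 7, 7, 7], [4, 7, 7, 12]))              # True

# Ratio rescaling invariance: multiplying every entry by 5 leaves the ranking unchanged
print(prefers([10, 12, 17, 20], [10, 10, 20, 20]) ==
      prefers([50, 60, 85, 100], [50, 50, 100, 100]))    # True
```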

10

11 Three Justificatory Schemes
Different ways to choose among SWFs, consistent with the "separateness of persons":
The veil of ignorance. Harsanyi versus Rawls. The Harsanyi VOI (with a risk-neutrality assumption) yields utilitarianism: x ≽_M y iff the equiprobable lottery over positions in x (a 1/N chance of being individual 1 in x, …, a 1/N chance of being individual N in x) is at least as good as the corresponding lottery over positions in y, iff (1/N)w_1(x) + … + (1/N)w_N(x) ≥ (1/N)w_1(y) + … + (1/N)w_N(y).
Temkin's complaints ("claims within outcomes"). In each outcome, worse-off individuals have complaints against better-off individuals (modulo responsibility). The ranking of outcomes (in light of persons' interests) balances complaint minimization against overall well-being.
Adler's "claims across outcomes" (building on Nagel). As between two outcomes, each individual has a claim in favor of the outcome in which she is better off. The ranking of outcomes (in light of persons' interests) balances these claims.

12 Claims across Outcomes versus Complaints (claims within outcomes)

          x     y
Amy       10    10
Betty     10    10
Mark Z    90    100

          z     w
Amy       5     5
Betty     60    70
Mark Z    90    80

Why believe that the balancing of overall well-being and complaint minimization necessarily yields the Pareto-superior outcome? By contrast, the claims-across-outcomes view directly justifies both the Pareto and Pigou-Dalton principles. Or, put differently, it helps us see the deep connection between these principles: why a Paretian welfarist should also care about equity in the sense of Pigou-Dalton.

13 Equality and Priority
Do prioritarians care about equality? Extensionally, yes. A decomposition theorem: any SWF that respects Pareto, anonymity, Pigou-Dalton and continuity (with or without separability) can be represented as overall well-being discounted by the degree of inequality:
x ≽_M y iff ∑g(w_i(x)) ≥ ∑g(w_i(y)) iff [1 − I_g(w_1(x), …, w_N(x))] ∑w_i(x) ≥ [1 − I_g(w_1(y), …, w_N(y))] ∑w_i(y)
However, the justification offered here for the prioritarian SWF is based on individuals' claims across outcomes, not a concern for comparative well-being within outcomes. (Contrast the case in which Temkin-style claims balanced against overall well-being somehow yield a Paretian, Pigou-Dalton-respecting, and separable SWF!)
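A minimal numerical sketch of the decomposition (my own code, using the standard Atkinson inequality index as I_g): for a fixed population, ranking by ∑g(w_i) coincides with ranking by inequality-discounted total well-being [1 − I_g(w)]∑w_i.

```python
import numpy as np

GAMMA = 2.0

def g(w):
    return w ** (1 - GAMMA) / (1 - GAMMA)

def g_inv(v):
    return (v * (1 - GAMMA)) ** (1 / (1 - GAMMA))

def prioritarian_value(w):
    return np.sum(g(np.asarray(w, dtype=float)))

def atkinson_index(w):
    """I_g(w) = 1 - EDE/mean, where EDE is the equally-distributed equivalent of w."""
    w = np.asarray(w, dtype=float)
    ede = g_inv(np.mean(g(w)))
    return 1 - ede / np.mean(w)

def discounted_total(w):
    """[1 - I_g(w)] * sum(w): overall well-being discounted by inequality."""
    w = np.asarray(w, dtype=float)
    return (1 - atkinson_index(w)) * np.sum(w)

x, y = [1, 3, 8, 10], [1, 2, 9, 10]
# Both representations rank x above y (fixed population of 4)
print(prioritarian_value(x) > prioritarian_value(y))   # True
print(discounted_total(x) > discounted_total(y))       # True
```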

14 Choosing a g(.) function
The Atkinson family is attractive: g(w) = (1−γ)^(−1) w^(1−γ).
γ is an ethical parameter, capturing the "social planner's" own moral/ethical views. It is problematic to try to estimate it empirically, and it should not be conflated with the parameter λ of individual risk aversion with respect to consumption for CRRA utility.
Leaky bucket question: Poor is at well-being level W, while Rich is at level KW. If we reduce Rich's well-being by ∆w and increase Poor's by f∆w, 0 < f < 1, what is the smallest value of f such that this remains a moral improvement? For a given γ, f = (1/K)^γ.
Equalization question: Is it an improvement to move from (W, W*) to (W⁺, W⁺), where W + W* > 2W⁺?
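A worked check of the leaky-bucket formula, a sketch with assumed values of K and γ: at f = (1/K)^γ, Poor's marginal gain under the Atkinson g exactly offsets Rich's marginal loss.

```python
# Leaky-bucket threshold f = (1/K)**gamma: the smallest fraction of Rich's
# well-being loss that must reach Poor for the transfer to remain an improvement.
def leaky_bucket_f(K, gamma):
    return (1.0 / K) ** gamma

# Marginal check: g'(w) = w**(-gamma), so Poor's gain f*dw*g'(W) just offsets
# Rich's loss dw*g'(K*W) when f = (1/K)**gamma (the level W cancels).
for K in (2, 5, 10):
    for gamma in (0.5, 1.0, 2.0):
        f = leaky_bucket_f(K, gamma)
        W, dw = 1.0, 1e-6
        gain = f * dw * W ** (-gamma)
        loss = dw * (K * W) ** (-gamma)
        assert abs(gain - loss) < 1e-12
        print(f"K={K}, gamma={gamma}: minimal f = {f:.3f}")
```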

15 A contrast with climate change scholarship
Assuming a preference-based well-being measure and the Atkinson g(.), the prioritarian formula for ranking outcomes (modulo zeroing-out) is:
E(x) = (1−γ)^(−1) ∑_i [s(R_i)u_{R_i}(c_i(x), a_i(x)) + t(R_i)]^(1−γ)
By contrast, the utilitarian formula often used in climate change scholarship ignores non-market attributes and preference heterogeneity, has a pure time preference (a well-being discount factor), and no inequality-aversion parameter:
E(x) = ∑_i d_i u(c_i), with d_i > d_j if individual j exists later in time than i
Even ignoring preference heterogeneity and non-market attributes, and with CRRA consumption utility, the two approaches differ:
E(x) = (1−γ)^(−1) ∑_i [u(c_i)]^(1−γ) vs. E(x) = ∑_i d_i u(c_i), with u(c_i) = (1−λ)^(−1) c_i^(1−λ)
Time preference violates anonymity and is an ad hoc way to handle extinction risk; the counterintuitive features of zero discounting can be mitigated, by prioritarians, by increasing the inequality-aversion parameter.

16 Utilitarianism and Prioritarianism under Risk
Utilitarianism: E(a) = ∑_x π_a(x) (∑_i w_i(x)) = ∑_i ∑_x π_a(x) w_i(x)
"Ex post" prioritarianism: E(a) = ∑_x π_a(x) (∑_i g(w_i(x))) = ∑_i ∑_x π_a(x) g(w_i(x))
"Ex ante" prioritarianism: E(a) = ∑_i g(∑_x π_a(x) w_i(x))
Utilitarianism and ex post prioritarianism apply expected-utility theory at the level of ethical choice: they maximize expected ethical value (i.e., the expected sum of utilities or transformed utilities). Ex ante prioritarianism does not.
"Transformed" approaches: ∑_x π_a(x) H(∑_i w_i(x)) or ∑_x π_a(x) H(∑_i g(w_i(x))). These are problematic with respect to separability.

17 Util. vs. EPP vs. EAP

        Policy a                Policy b
        x     y     Exp. wb     z     zz    Exp. wb
Jim     70    30    50          50    60    55
June    30    70    50          50    40    45

Note: The outcomes are equiprobable, i.e., π_a(x) = π_a(y) = 1/2 and π_b(z) = π_b(zz) = 1/2. Utilitarianism is indifferent between the policies. Ex post prioritarianism prefers policy b. Ex ante prioritarianism prefers policy a.
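The following sketch (my own Python; the square root stands in for an arbitrary strictly increasing, strictly concave g) implements the three formulas from the previous slide and reproduces the rankings claimed here.

```python
import math

g = math.sqrt   # any strictly increasing, strictly concave g will do

# Each policy: list of (probability, outcome), an outcome being (Jim, June) well-being
policy_a = [(0.5, (70, 30)), (0.5, (30, 70))]
policy_b = [(0.5, (50, 50)), (0.5, (60, 40))]

def util(policy):
    return sum(p * sum(w) for p, w in policy)

def epp(policy):   # ex post prioritarianism: expected sum of transformed well-being
    return sum(p * sum(g(wi) for wi in w) for p, w in policy)

def eap(policy):   # ex ante prioritarianism: sum of transformed expected well-being
    n = len(policy[0][1])
    exp_wb = [sum(p * w[i] for p, w in policy) for i in range(n)]
    return sum(g(v) for v in exp_wb)

print(util(policy_a), util(policy_b))   # 100.0 100.0 -> utilitarian indifference
print(epp(policy_a) < epp(policy_b))    # True       -> EPP prefers policy b
print(eap(policy_a) > eap(policy_b))    # True       -> EAP prefers policy a
```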

18 Which is most attractive?

          Stoch. Dominance   Sure Thing   Time Consistent   Pigou-Dalton   Ex Ante Pareto
Util.     Yes                Yes          Yes               No             Yes
EPP       Yes                Yes          Yes               Yes            No
EAP       No                 No           No                Yes            Yes

For those who find the axioms of stochastic dominance, sure thing, and time consistency normatively compelling (anyone attracted to expected utility theory!), EAP will seem highly problematic. The choice is, for them, between utilitarianism and EPP.

19 Illustrating the axioms

        Policy a                Policy b
        x     y     Exp. wb     z       zz      Exp. wb
Jim     10    90    50          50−ε    50−ε    50−ε
June    90    10    50          50−ε    50−ε    50−ε

Illustrates stochastic dominance and ex ante Pareto for the three approaches.

        Policy a                Policy b
        x     y     Exp. wb     z     y     Exp. wb
Jim     80    50    65          90    50    70
June    20    50    35          10    50    30

        Policy a*               Policy b*
        x     y*    Exp. wb     z     y*    Exp. wb
Jim     80    10    45          90    10    50
June    20    200   110         10    200   105

Illustrates the sure thing principle for the three approaches.

20 Time Inconsistency
A simple example. Assume 2 generations. The news about the effect of climate change on the second generation can be either "good" or "bad." At an initial point, the decisionmaker assigns each equal probability. She then learns the news and, if it is "bad," can mitigate the bad effects (at some cost to the current generation). If "good," the well-being of the two generations is (200, 300). With "bad" news and no mitigation it is (200, 100); with "bad" news and mitigation it is (140, 140). Assume a prioritarian sufficiently inequality averse to prefer the outcome (140, 140) to (200, 100). At the initial point, the expected well-being of the generations, with a plan not to mitigate upon learning the news, is (200, 200); with a plan to mitigate, it is (170, 220). Thus EAP initially plans not to mitigate, but then, if the bad news is learned, prefers to mitigate. By contrast, Util consistently plans not to mitigate if the news is bad, and EPP consistently plans to mitigate if the news is bad.
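A sketch of my own (an Atkinson g with an assumed γ = 2, which is inequality-averse enough for the stipulation above) reproducing the numbers in this example.

```python
GAMMA = 2.0   # assumed: inequality-averse enough to prefer (140, 140) to (200, 100)

def g(w):
    """Atkinson transform g(w) = w**(1-gamma)/(1-gamma)."""
    return w ** (1 - GAMMA) / (1 - GAMMA)

good = (200, 300)         # (gen 1, gen 2) well-being if the news is good
bad_no_mit = (200, 100)   # bad news, no mitigation
bad_mit = (140, 140)      # bad news, mitigation

def eap_plan_value(outcome_if_bad):
    """Ex ante prioritarian value of a plan, evaluated before the news arrives."""
    exp_gen1 = 0.5 * good[0] + 0.5 * outcome_if_bad[0]
    exp_gen2 = 0.5 * good[1] + 0.5 * outcome_if_bad[1]
    return g(exp_gen1) + g(exp_gen2)

# Ex ante, EAP prefers the plan not to mitigate: expected well-being (200, 200) vs (170, 220).
print(eap_plan_value(bad_no_mit) > eap_plan_value(bad_mit))          # True

# Once the bad news arrives, the same prioritarian ranking of outcomes prefers mitigation,
# so the ex ante plan is abandoned: time inconsistency.
print(sum(g(w) for w in bad_mit) > sum(g(w) for w in bad_no_mit))    # True

# Utilitarianism (never mitigate) and EPP (always mitigate) rank consistently here.
print(sum(bad_no_mit) > sum(bad_mit))                                # True: Util prefers no mitigation
```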

21 EPP vs. Utilitarianism: Climate Policy The extent to which EPP (ex post prioritarianism) and utilitarianism diverge substantially with respect to climate policy, given the empirics of our climate system, global economy and population, etc., is an open question, on which research is now needed. What follows are possible differences …

22 Climate Change: SCC_U vs. SCC_EPP
(The formulas defining SCC_U and SCC_EPP appear as displayed equations on the original slide.) Here c is per capita (normalized) consumption, with equality within each generation; P is an economic/climate path; e_s is emissions at time s; and current consumption c_1 is the numeraire. Note that SCC_EPP can be greater or less than SCC_U, depending on the relative likelihood of paths where future generations are worse off vs. better off compared to the present.
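Since the slide's formulas did not survive the transcript, the following is only a schematic sketch under assumptions of my own (log utility for well-being, an Atkinson g, a deterministic path): the prioritarian weight on a future consumption loss, relative to current consumption, is g'(w_s)u'(c_s) / [g'(w_1)u'(c_1)], versus u'(c_s)/u'(c_1) for the utilitarian, so it exceeds the utilitarian weight exactly when the future generation is worse off than the present.

```python
import math

GAMMA = 2.0   # assumed inequality aversion

def u(c):            # assumed: well-being is log consumption
    return math.log(c)

def u_prime(c):
    return 1.0 / c

def g_prime(w):      # Atkinson g'(w) = w**(-gamma)
    return w ** (-GAMMA)

def damage_weight_util(c_now, c_future):
    """Utilitarian weight on a unit consumption loss at a future date (current consumption numeraire)."""
    return u_prime(c_future) / u_prime(c_now)

def damage_weight_epp(c_now, c_future):
    """Prioritarian weight: marginal social value of future consumption relative to current."""
    return (g_prime(u(c_future)) * u_prime(c_future)) / (g_prime(u(c_now)) * u_prime(c_now))

c_now = 10.0
for c_future in (5.0, 10.0, 20.0):   # future poorer than, equal to, richer than the present
    print(c_future,
          round(damage_weight_util(c_now, c_future), 3),
          round(damage_weight_epp(c_now, c_future), 3))
# The prioritarian weight exceeds the utilitarian one when the future is poorer,
# and falls below it when the future is richer, so SCC_EPP can be above or below SCC_U.
```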

23 Optimal Abatement
A toy model: 2 periods; resources in the first period are either consumed or invested at rate r; CRRA utility. With intragenerational equality, the utilitarian optimum is c_2 = c_1(1+r)^(1/λ). The prioritarian optimum is c_2 = c_1(1+r)^(1/[λ+γ(1−λ)]). Assuming λ < 1 (to avoid negative utilities), less is invested for the future and there is less inequality between generations. As γ approaches infinity, the optimal allocation approaches equality between the 2 generations.
With π the fraction of second-period consumption held by the worse off and (to simplify) linear utility, the utilitarian invests everything. The prioritarian optimum is c_2 = (c_1/2)(1+r)^(1/γ)[π^(1−γ) + (1−π)^(1−γ)]^(1/γ). There are subtle interactions between π and γ, i.e., between future inequality and inequality aversion: the ratio c_2/c_1 is increasing in π for γ < 1 and decreasing in π for γ > 1.
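The closed-form optima can be computed directly; the sketch below (my own code, with illustrative parameter values) also checks the π–γ interaction noted above.

```python
def c2_utilitarian(c1, r, lam):
    """Utilitarian optimum with CRRA utility: c2 = c1 * (1+r)**(1/lambda)."""
    return c1 * (1 + r) ** (1 / lam)

def c2_prioritarian(c1, r, lam, gamma):
    """Prioritarian optimum: c2 = c1 * (1+r)**(1/(lambda + gamma*(1-lambda)))."""
    return c1 * (1 + r) ** (1 / (lam + gamma * (1 - lam)))

def c2_prioritarian_linear(c1, r, gamma, pi):
    """Linear utility; pi = fraction of second-period consumption held by the worse off."""
    bracket = pi ** (1 - gamma) + (1 - pi) ** (1 - gamma)
    return (c1 / 2) * (1 + r) ** (1 / gamma) * bracket ** (1 / gamma)

c1, r, lam = 1.0, 1.0, 0.5                             # illustrative values, lambda < 1
print(c2_utilitarian(c1, r, lam))                      # 4.0
for gamma in (0.5, 2.0, 10.0):
    print(gamma, c2_prioritarian(c1, r, lam, gamma))   # 2.52, 1.59, 1.13: approaches c1 as gamma grows

# Interaction of pi and gamma: c2/c1 rises with pi when gamma < 1, falls when gamma > 1.
print(c2_prioritarian_linear(c1, r, 0.5, 0.2) < c2_prioritarian_linear(c1, r, 0.5, 0.4))   # True
print(c2_prioritarian_linear(c1, r, 2.0, 0.2) > c2_prioritarian_linear(c1, r, 2.0, 0.4))   # True
```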

24 Other Climate Change Implications
"Catastrophe": Let W_L be a specified "low" level of well-being. A policy will reduce the risk of generations falling below W_L, at some cost to above-W_L generations. EPP is willing to incur greater such costs than utilitarianism.
Decreasing risk: A policy reduces the downside risk, for a given generation, of being badly off, but also reduces their upside prospects, so that expected well-being remains the same. EPP prefers the policy, while utilitarianism is indifferent.
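A minimal sketch of the "decreasing risk" point, with made-up numbers: the policy narrows a generation's well-being lottery around the same mean.

```python
import math

g = math.sqrt     # any strictly concave prioritarian transform

# A given generation's well-being lottery: the policy narrows the spread but
# keeps expected well-being at 50.
status_quo = [(0.5, 20), (0.5, 80)]
policy     = [(0.5, 40), (0.5, 60)]

def util(lottery):
    return sum(p * w for p, w in lottery)

def epp(lottery):
    return sum(p * g(w) for p, w in lottery)

print(util(status_quo) == util(policy))   # True: utilitarianism is indifferent
print(epp(policy) > epp(status_quo))      # True: EPP prefers the downside-risk reduction
```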

25 Deep Uncertainty
How to model this is a burgeoning area of research in economics and decision theory ("ambiguity"). Many approaches now use a set of probabilities, e.g., Gilboa/Schmeidler maximin EU, or Klibanoff et al. second-order risk aversion. Such approaches seem equally applicable to Util. and EPP, simply changing the social maximand: e.g., Klibanoff et al., with the value of each outcome being the sum of utilities for Util. or the sum of transformed utilities for EPP.
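As a structural sketch only (none of these functional forms or parameter values come from the slides), here is a Klibanoff-style smooth-ambiguity evaluation in which the inner social maximand is swapped between the utilitarian and EPP forms.

```python
import math

GAMMA = 2.0      # inequality aversion (EPP transform)
ETA = 0.1        # assumed second-order (ambiguity) aversion parameter

def g(w):                      # EPP transform of individual well-being
    return w ** (1 - GAMMA) / (1 - GAMMA)

def phi(v):                    # assumed concave transform of expected social value
    return -math.exp(-ETA * v)

def expected_social_value(policy, probs, transform):
    """Expectation over outcomes of the social maximand (sum of transformed well-being)."""
    return sum(p * sum(transform(wi) for wi in outcome) for p, outcome in zip(probs, policy))

def smooth_ambiguity_value(policy, prob_models, model_weights, transform):
    """Second-order expectation of phi applied to each model's expected social value."""
    return sum(mu * phi(expected_social_value(policy, probs, transform))
               for mu, probs in zip(model_weights, prob_models))

# Two probability models over two outcomes; equal second-order weight on each model.
policy = [(10.0, 10.0), (5.0, 20.0)]
prob_models = [(0.8, 0.2), (0.2, 0.8)]
weights = (0.5, 0.5)

print(smooth_ambiguity_value(policy, prob_models, weights, transform=lambda w: w))  # utilitarian maximand
print(smooth_ambiguity_value(policy, prob_models, weights, transform=g))            # EPP maximand
```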

26 Variable population
Average utilitarianism/prioritarianism: non-separable; violates the "negative expansion" principle.
Total utilitarianism/prioritarianism: "repugnant conclusion."
Critical-level utilitarianism/prioritarianism: with the critical level above the level of a life just worth living/equally good as nonexistence, this approach is separable, satisfies the "negative expansion" principle, and avoids the repugnant conclusion:
E(x) = ∑_{i=1}^{N(x)} (w_i(x) − w*) or E(x) = ∑_{i=1}^{N(x)} (g(w_i(x)) − g(w*)), with w* the critical level of well-being (above a life just worth living).
This generalizes straightforwardly, using the EPP or utilitarian formula, to choice under risk:
E(a) = ∑_x π_a(x) ∑_{i=1}^{N(x)} (w_i(x) − w*) or E(a) = ∑_x π_a(x) ∑_{i=1}^{N(x)} (g(w_i(x)) − g(w*))
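A small sketch of the critical-level formulas (my own numbers; w* = 10 and γ = 2 are arbitrary), showing how the critical level blocks the repugnant-conclusion comparison.

```python
GAMMA = 2.0
W_CRIT = 10.0       # assumed critical level, above a life just worth living

def g(w):
    return w ** (1 - GAMMA) / (1 - GAMMA)

def clu(wellbeings):                      # critical-level utilitarianism
    return sum(w - W_CRIT for w in wellbeings)

def clp(wellbeings):                      # critical-level prioritarianism
    return sum(g(w) - g(W_CRIT) for w in wellbeings)

# A small, well-off population versus a huge population just above the level
# of a life worth living: lives below the critical level count against the outcome.
small_happy = [100.0] * 5
huge_barely = [2.0] * 10_000              # worth living, but below W_CRIT

print(clu(small_happy) > clu(huge_barely))    # True
print(clp(small_happy) > clp(huge_barely))    # True
```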

27 Scaling the Well-Being Function
While a well-being function w(c, a, R) unique up to a positive affine transformation, which provides intra- and interpersonal comparisons of levels and differences, is sufficient for utilitarianism, this is not true for prioritarianism. In particular, the Atkinson SWF requires a w(.) unique up to a ratio transformation.

        Original           Renumbering 1       Renumbering 2       Renumbering 3
        x    y    Diff.    x     y     Diff.   x     y     Diff.   x     y     Diff.
Jim     10   11   1        200   220   20      2     12    10      30    33    3
Sue     30   25   −5       60    50    −10     202   152   −50     90    75    −15
Sum     40   36            260   270           204   164           120   108

Renumbering 1 is an individual-specific ratio transformation (preserves only intrapersonal information). Renumbering 2 is a common affine transformation (preserves intra- and interpersonal levels and differences). Renumbering 3 is a common ratio transformation. The utilitarian SWF is invariant to Renumberings 2 and 3; the Atkinson SWF only to 3.
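A numerical check of the invariance claims, using the table's numbers (my own code; γ = 0.5 is chosen deliberately so that the failure of affine invariance is visible with these particular vectors).

```python
GAMMA = 0.5   # illustrative; picked so the affine-invariance failure shows up for these numbers

def util(w):
    return sum(w)

def atkinson(w):
    return sum(wi ** (1 - GAMMA) for wi in w) / (1 - GAMMA)

def same_ranking(rule, pair1, pair2):
    """True if the rule ranks pair1 = (x, y) and pair2 = (x', y') the same way."""
    return (rule(pair1[0]) >= rule(pair1[1])) == (rule(pair2[0]) >= rule(pair2[1]))

original      = ([10, 30], [11, 25])      # (x, y), entries ordered (Jim, Sue)
ratio_indiv   = ([200, 60], [220, 50])    # Renumbering 1: Jim x20, Sue x2
affine_common = ([2, 202], [12, 152])     # Renumbering 2: w -> 10w - 98
ratio_common  = ([30, 90], [33, 75])      # Renumbering 3: w -> 3w

print(same_ranking(util, original, affine_common))      # True
print(same_ranking(util, original, ratio_common))       # True
print(same_ranking(util, original, ratio_indiv))        # False
print(same_ranking(atkinson, original, ratio_common))   # True
print(same_ranking(atkinson, original, affine_common))  # False: Atkinson is not affine-invariant
```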

28 Scaling the Well-Being Function
We construct w(.) by identifying a "zero" history (c_zero, a_zero, R_zero):
w(c, a, R) = [s(R)u_R(c, a) + t(R)] − [s(R_zero)u_{R_zero}(c_zero, a_zero) + t(R_zero)]
By construction, w(c_zero, a_zero, R_zero) = 0. With the Atkinson SWF, the zero history (c_zero, a_zero, R_zero) is both the "horizon of ethical assessment" and the point of absolute ethical priority. The Atkinson SWF is "badly behaved" if anyone has a negative w value in any outcome. Moreover, for any "positive" history such that w(c, a, R) > 0, the ratio between the ethical impact of adding an increment of well-being ∆w to the zero history, and the ethical impact of adding ∆w to the positive history, becomes infinite. (c_zero, a_zero, R_zero) should not be conflated with the level of a life worth living (c_worth, a_worth, R_worth), nor with the critical-level history (c_crit, a_crit, R_crit). Relation to Weitzman's "dismal theorem."

29 Significance of choice of zero history
Zero bundle at (c_sub, h_death): w(c, h) = log(c)m(h) − log(c_sub)m(h_death).
Zero bundle at (c_sub, h_worst): w(c, h) = log(c)m(h) − log(c_sub)m(h_worst).
Marginal moral impact of $1 for a person with $100,000 and health h:
w(100,000, h)^(−γ) m(h)(1/100,000) = (11.51 m(h) − log(c_sub)m(h_death))^(−γ) × m(h)(1/100,000), or
w(100,000, h)^(−γ) m(h)(1/100,000) = (11.51 m(h) − log(c_sub)m(h_worst))^(−γ) × m(h)(1/100,000).
Marginal moral impact of $1 for a person with $20,000 and health h:
w(20,000, h)^(−γ) m(h)(1/20,000) = (9.90 m(h) − log(c_sub)m(h_death))^(−γ) × m(h)(1/20,000), or
w(20,000, h)^(−γ) m(h)(1/20,000) = (9.90 m(h) − log(c_sub)m(h_worst))^(−γ) × m(h)(1/20,000).
Comparative marginal moral impact of $1.
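The slide's expressions leave c_sub, m(h_death), m(h_worst) and m(h) unspecified, so the sketch below simply plugs in placeholder values of my own to show how the marginal moral impacts are computed and compared across the two zero-bundle choices.

```python
import math

GAMMA = 2.0           # assumed inequality aversion
C_SUB = 500.0         # assumed subsistence consumption level (not given on the slide)
M_DEATH, M_WORST, M_H = 0.0, 0.1, 0.9   # assumed health multipliers: death, worst health, health h

def w(c, m_h, m_zero):
    """Well-being with the log-consumption/health-multiplier form and a chosen zero bundle."""
    return math.log(c) * m_h - math.log(C_SUB) * m_zero

def marginal_moral_impact(c, m_h, m_zero):
    """g'(w) * dw/dc for one extra dollar: w**(-gamma) * m(h) * (1/c)."""
    return w(c, m_h, m_zero) ** (-GAMMA) * m_h * (1.0 / c)

for label, m_zero in (("zero at (c_sub, h_death)", M_DEATH), ("zero at (c_sub, h_worst)", M_WORST)):
    for c in (100_000.0, 20_000.0):
        print(label, int(c), marginal_moral_impact(c, M_H, m_zero))
    # Comparative impact: ratio of the poorer person's to the richer person's marginal moral impact
    ratio = marginal_moral_impact(20_000.0, M_H, m_zero) / marginal_moral_impact(100_000.0, M_H, m_zero)
    print(label, "poor/rich ratio:", round(ratio, 2))
```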

