For Ethical Issues in Science and Technology J. Blackmon.


1 For Ethical Issues in Science and Technology J. Blackmon

2  Select Themes in Ethics of Technology  Ethical Theory and Terminology  Conclusion

3

4  Themes in Ethics of Technology  Innovation Presumption: Liberty, Optimism, and Inevitability  Situated Technologies  Lifecycle  Power  Forms of Life  Extrinsic Concerns  Intrinsic Concerns  Responsible Development  Ethics and Public Policy  Framework for Ethical Analysis of Emerging Technologies

7  Why does the ethical evaluation of technology tend to focus on what might be problematic about it?  Are we all Luddites?

8  Innovation Presumption: We should invent, adopt, and use new technologies.

9  Innovation Presumption + Unknown Consequences → Warranted Scrutiny

10  Innovation Presumption: We should invent, adopt, and use new technologies.  The popularity of the Innovation Presumption + Unknown Consequences → Warranted Scrutiny

11  And it’s a win either way: scrutiny lets us avoid the negative consequences in cases where the IP turns out to be false, and in cases where it turns out to be true, scrutiny may reveal why it’s true.  So we advocate scrutiny of the IP without adopting an unfair bias against it.

12 Three Principles Supporting the IP 1. Liberty: We should be free to innovate so long as it isn’t harmful to others. (J. S. Mill’s Harm Principle) 2. Technological Optimism: Technology improves and will continue to improve human lives. 3. Technological Determinism: Technological advancement is inevitable.

14 Liberty: We should be free to innovate so long as it isn’t harmful to others. Harm Principle: The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.

15 Liberty: We should be free to innovate so long as it isn’t harmful to others. Much remains to be said!  What is harm?  Who are the others?  What kind of power, and how much?

17 Cindy cuts Jake  She’s a trained surgeon performing an emergency tracheotomy with no anesthesia.  Cindy has caused physical damage and pain to Jake. She may even have done this against his will.  Has Cindy caused harm to Jake?

19 Cindy cuts Jake  So perhaps whether Cindy harms Jake depends on whether she saves him from further/worse harm.

20 Cindy cuts Jake  So perhaps whether Cindy harms Jake depends on whether she saves him from further/worse harm.  But then whether harm is being committed depends on what is yet to come.

21 Cindy cuts Jake  So perhaps whether Cindy harms Jake depends on whether she saves him from further/worse harm.  But then whether harm is being committed depends on what is yet to come.  This is a notorious problem: How far into the future should we look when assessing consequences?

22  How far into the future should we look when assessing consequences?  Even if we somehow decided, should this view of harm as something determined by future outcomes be defended, or should we look for a more plausible/satisfying alternative?

23 The Dukes of Hazzard  Bo and Luke Duke drive with extreme recklessness all over town, never hitting a single person due largely to luck.  Have they harmed anyone?

24 Liberty: We should be free to innovate so long as it isn’t harmful to others. Much remains to be said!  What is harm?  Who are the others?  What kind of power, and how much?

25 The Chalk Checker  Ida parks in a 1-hour parking space for more than an hour when other 1-hour spaces are left unoccupied. She gets a ticket.

26 The Chalk Checker  In court, Ida cites the great philosopher J. S. Mill, saying, “The only purpose for which power can be rightfully exercised over any member of a civilized community, against her will, is to prevent harm to others. But fining me does not prevent harm to anyone, not even if we count mere inconvenience as a harm. After all, no one was harmed, not even inconvenienced.”

28 The Chalk Checker  Is Ida right? Which “other” has been harmed? And if she is right, does the city have a case against her?  If the city does have a case against her, and yet no one has been harmed, then is J. S. Mill wrong?

29 Non-existing Future and Possible Persons  Recall the commentator on Krasny’s Forum show: There are no passenger pigeons wanting to be brought back into existence.

30 Non-existing Future and Possible Persons  The problem generalizes.

31 Test Cases  Cindy the Surgeon  The Dukes of Hazzard  The Chalk Checker  Non-Existing Future and Possible Pigeons (Persons) These cases test the Harm Principle.

32 Test Cases  Cindy the Surgeon  The Dukes of Hazzard  The Chalk Checker  Non-Existing Future and Possible Pigeons (Persons) Harm Principle: The only purpose for which power can be rightfully exercised over any member of a civilized community, against his will, is to prevent harm to others.

33 Questions  What is harm?  Who are the others?  What kind of power?

34 What kind of power?  In many countries, the state has intervened in the sale and distribution of cigarettes— ostensibly for the good of the people.  State Paternalism: In some cases, the state should limit the behavior of the people for their own good.

35 What kind of power?  Hard Paternalism: Restricting activity for the good of the person regardless of whether they are informed and consenting.  Soft Paternalism: Restricting activity for the good of the person until that person meets certain standards for voluntary action.

36 What kind of power?  Thus, the Harm Principle and the Liberty Principle raise questions about the exercise of power.  When power/intervention is called for, how should it be used?

37 What kind of power?  Legal coercion  Financial incentives  Self-regulation

38  Liberty: We should be free to innovate so long as it isn’t harmful to others.  Harm Principle: The only purpose for which power can be rightfully exercised over any member of a civilized community, against his/her will, is to prevent harm to others.  While these principles may be attractive and compelling sentiments, they do not in themselves resolve important questions about harm, others, and power.

39

40  Types of Value  Instrumental Value: the value of something as a means to an end.  Intrinsic/Final Value: the value of something for what it is or as an end.

41  Types of Value  Instrumental Value: the value of something as a means to an end.  Intrinsic/Final Value: the value of something for what it is or as an end.  Subjective  Objective

43  Instrumental Value: the value of something as a means to an end.

44 The hammer has instrumental value because it does something valuable: pound nails.

45 Pounding nails gets its value from easily and securely fastening pieces of wood together, which gets its value from building safe, comfortable structures, which gets its value from health, comfort, and general happiness.

46  Instrumental Value: the value of something as a means to an end.  Instrumental value entails some non- instrumental value.

47  Types of Value  Instrumental Value: the value of something as a means to an end.  Intrinsic/Final Value: the value of something for what it is or as an end.  Subjective  Objective

48  Intrinsic/Final Value: the value of something for what it is or as an end.  Subjective: The value depends on how or whether we value it.  Objective: The value is independent of how or whether we value it.

49  Subjective Final Value: Valued by us as an end in itself  Examples of things often thought to have subjective final value:  Works of art  Landscapes  Mementos  Religious artifacts  Historical sites

50  Objective Final Value: Valuable independent of whether it’s valued by us as an end in itself  Examples of things often thought to have objective final value:  Human beings  Life  Typically, technology is not thought to have objective final value.  But of course, it affects things which do have it.

51 Types of Theories  Example: Animal Testing Suppose that developing a novel medical technology requires extensive animal testing which would cause considerable and persistent pain and suffering. Suppose also that these animals are morally considerable—their suffering matters in a moral sense. Should the testing go forward?

52 Types of Theories  One Response: It depends on the balance of the good and bad outcomes that would result (or are expected to result) from the testing. If the outcome is likely to be overall good, then Yes; if the outcome is likely to be overall bad, then No.

53 Types of Theories  Another Response: No. It is wrong to intentionally cause harm to animals.

54 Types of Theories  Another Response: Yes. It is permissible to use (nonhuman) animals in the service of humans.

55 Types of Theories  It depends on the balance of the good and bad outcomes.  No.  Yes.

56 Types of Theories  It depends on the balance of the good and bad outcomes. --Consequentialist  No. --Deontological  Yes. --Deontological

57 Types of Theories  Consequentialist Normative Theory: One ought to take the action which will result in the best outcome.  Deontological Normative Theory: One ought to do something if it conforms to an operative rule.

59 Types of Theories  Consequentialist Normative Theory: One ought to take the action which will result in the best outcome.  Deontological Normative Theory: One ought to do something if it conforms to an operative rule.  Virtue Theory: One ought to do whatever is an instance of virtue: compassion, courage, honesty, etc.

60 Consequentialist Normative Theory  Utilitarianism: One ought to take the action which will maximize utility.

61 Utilitarianism: One ought to take the action which will maximize utility.

62  What is utility?

63 Utilitarianism: One ought to take the action which will maximize utility.  What is utility?  Pleasure (Jeremy Bentham)  Happiness (J. S. Mill)

64 Utilitarianism: One ought to take the action which will maximize utility.  What is utility?  Pleasure (Jeremy Bentham)  Happiness (J. S. Mill)  Maximizing pleasure or happiness?

65 Isn’t this just what they call hedonism?

66 Not really. For at least two reasons.

67 First, there are versions of utilitarianism which go beyond base pleasure.  What is utility?  Pleasure (Jeremy Bentham)  Happiness, and greater and lesser kinds of happiness (J. S. Mill)  Preference satisfaction?  Well-being or welfare?  Lack of pain and suffering?

69 “It is better to be a human being dissatisfied than a pig satisfied; better to be Socrates dissatisfied than a fool satisfied.” J. S. Mill There are different kinds of happiness: higher quality and lower quality.

70 Second, it depends on what you mean by hedonism. Is utilitarianism a kind of hedonism?  Yes, partly, by the original philosophical meaning.  No, not at all, by a popular conception.

71  Philosophical/Historical Definition: Hedonism is the view that the only intrinsic (non-instrumental) value is pleasure.  A Common Definition: Hedonism is the pursuit of pleasure; sensual self-indulgence; self-gratification.

72  Utilitarianism is emphatically not about maximizing one’s own happiness. That would be ethical egoism.  Goal of Utilitarianism: The greatest good for the greatest number.  And this can entail great personal sacrifice, which is entirely antithetical to ethical egoism.

73 A good utilitarian: “The needs of the many outweigh the needs of the few, or the one.” Mr. Spock, Star Trek II: The Wrath of Khan

74 A good utilitarian: “The needs of the many outweigh the needs of the few, or the one.” Mr. Spock, Star Trek II: The Wrath of Khan Which many?

75 A good utilitarian: “The needs of the many outweigh the needs of the few, or the one.” Mr. Spock, Star Trek II: The Wrath of Khan Which many? What needs?

76 Utilitarianism: One ought to take the action which will maximize utility.  What counts as utility?  What is it to maximize it?

77 From Ian Barbour’s ‘Philosophy and Human Values’

78 As Ian Barbour points out, if we are only concerned with the greatest good for the greatest number, certain questions and problems arise.

79  Who matters?

80 As Ian Barbour points out, if we are only concerned with the greatest good for the greatest number, certain questions and problems arise.  Who matters?  How do we quantify and measure utility? And if there are different kinds, how do we compare them?

81 As Ian Barbour points out, if we are only concerned with the greatest good for the greatest number, certain questions and problems arise.  Who matters?  How do we quantify and measure utility? And if there are different kinds, how do we compare them?  Total utility is blind to distribution of utility; thus, there is nothing inherently wrong with disparities of justice and equality.

82

83 Bill Gates walks into a bar… The total income of the ten patrons is $505,000, so the average (mean) income is $50,500.

84 Bill Gates walks into a bar… Gates walks in with his $1 billion income. The total income of the group jumps to over $1 billion, and the mean income jumps to roughly $91 million ($1,000,505,000 divided by eleven people).
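The arithmetic behind this slide can be checked with a short computation. The individual patron incomes below are hypothetical, chosen only so that they total $505,000 as on the slide; the same data also show why the median, unlike the mean, is robust to one extreme value.

```python
from statistics import mean, median

# Ten hypothetical bar patrons. These specific figures are assumptions,
# chosen only so that they total $505,000 as on the slide.
incomes = [40_000, 42_000, 45_000, 48_000, 50_000,
           52_000, 55_000, 56_000, 57_000, 60_000]

print(f"mean before Gates:   ${mean(incomes):,.0f}")    # $50,500
print(f"median before Gates: ${median(incomes):,.0f}")  # $51,000

incomes.append(1_000_000_000)  # Bill Gates walks in

print(f"mean after Gates:    ${mean(incomes):,.0f}")    # $90,955,000
print(f"median after Gates:  ${median(incomes):,.0f}")  # $52,000
```

The mean leaps to roughly $91 million while the median barely moves, which is why median income is the standard summary statistic for skewed distributions like income.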

85 Who Matters?  Most forms of utilitarianism are anthropocentric (human-centered). But increasingly the “circle of moral consideration” has expanded to include non-human animals.  This raises questions, however, about which animals are included.  This in turn raises the question of what makes an entity worthy of moral consideration.

86 Who Matters?  Also, do we include the happiness of only living beings?  Or do we include the happiness of those who are yet to come?

87 Who Matters?  If the latter, that is if we include the happiness of those who are yet to come, how exactly do we do this?  How far into the future do we have to look?  And what we do now not only determines “happiness levels”, but it also determines who will end up coming into existence.  “My parents should have waited until they were better off financially before having me!”

88 Who Matters?  As Barbour points out, if sum total happiness is the goal, then this might be achieved just by having a great number of moderately happy people in the world.  Worse, they could be no more than just tolerably well off, so long as there are enough of them.  Derek Parfit calls this the “repugnant conclusion”: this would not be a good world.
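Parfit’s worry can be made concrete with toy numbers. The utility figures below are purely illustrative assumptions (not from the slides): a vast population of lives barely worth living can out-total a smaller population of flourishing lives.

```python
# Toy utility figures (illustrative assumptions): utility per life,
# where 0 marks a life just barely worth living at all.
flourishing = [90] * 1_000    # 1,000 people with excellent lives
barely_ok   = [1] * 100_000   # 100,000 lives barely worth living

print(sum(flourishing))  # 90000
print(sum(barely_ok))    # 100000 -- the "repugnant" world wins on total utility
```

On a pure total-utility ranking, the second world is better, which is exactly the conclusion Parfit finds repugnant.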

89 How do we quantify the greatest good?  Is there just one thing, happiness (or pleasure, or whatever), that can be measured on a single numerical scale?  It would appear not. In fact, it appears that we have different kinds of goods.

90 How do we quantify the greatest good?  Is there just one thing, happiness (or pleasure, or whatever), that can be measured on a single numerical scale?  It would appear not. In fact, it appears that we have different kinds of goods.  If so, then what if they are incommensurable?

91 Total utility is blind to distribution.  Suppose the extermination of a small minority would make the majority so happy that the total happiness is increased.  Suppose total national income can be increased if we accept great poverty for one segment of society.

92 Total utility is blind to distribution.  Suppose the extermination of a small minority would make the majority so happy that the total happiness is increased.  Suppose total national income can be increased if we accept great poverty for one segment of society.  According to utilitarianism, there is nothing inherently wrong here.
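The blindness to distribution is easy to see with made-up numbers (illustrative assumptions only): a total-utility ranking can prefer a world that leaves one person with nothing.

```python
# Two hypothetical distributions of welfare across four people.
equal_world   = [50, 50, 50, 50]   # everyone equally well off
unequal_world = [90, 90, 90, 0]    # one person left destitute

print(sum(equal_world))    # 200
print(sum(unequal_world))  # 270 -- total utilitarianism prefers this world
print(min(unequal_world))  # 0   -- despite the worst-off having nothing
```

Because the totals are all that count, utilitarianism by itself registers no objection to the second world, which is why some propose supplementing it with a principle of justice.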

93 Total utility is blind to distribution.  This blindness to distribution results in a system that is unjust.  Unless we are prepared to abandon justice, utilitarianism without justice must be rejected.  We might reject utilitarianism altogether, or we might supplement it with a principle of justice.

94 Utility and Justice  If the total good were the only criterion, then we could justify a small social gain even if it entailed a gross injustice.  But if justice were the only norm, then we would have to correct a small injustice even if it resulted in widespread suffering or social harm.  Apparently, we must consider both justice and the total good.

