WHAT DOES IT MEAN TO CREATE A SELF?
Mark R. Waser, Digital Wisdom Institute



OVERVIEW
1. What is a "self"?
2. Why a "self"?
3. Unpacking "morality"
4. Call for collaborators

SELF IS A SUITCASE WORD
"the mere fact of being self-referential causes a self, a soul, a consciousness, an 'I' to arise out of mere matter"
(Douglas Hofstadter, I Am a Strange Loop)

SELF-REFERENTIALITY
- Self-referentiality (e.g. the three-body gravitational problem) leads directly to indeterminacy *even in* deterministic systems (a toy numerical illustration follows below).
- Humans treat indeterminacy of behavior as necessary and sufficient to define an entity rather than an object, and innately over-extend this via the "pathetic fallacy" (ascribing agency to non-agents).
- Humans then quickly leap to analyzing what such a system needs to remain intact, and equally rapidly ascribe "wants" to fulfill those "needs" (values to support those goals).
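To make the first bullet concrete, here is a minimal sketch (not from the original slides; the constants, initial conditions, and step sizes are illustrative assumptions) of a deterministic planar three-body simulation in which a one-part-in-a-billion perturbation of a single coordinate typically leads to a noticeably different final state:

```python
from math import sqrt

G = 1.0        # gravitational constant, arbitrary units
DT = 0.001     # forward-Euler time step
STEPS = 20000  # number of integration steps


def step(bodies):
    """Advance [(mass, x, y, vx, vy), ...] by one forward-Euler step."""
    new = []
    for i, (m_i, x_i, y_i, vx_i, vy_i) in enumerate(bodies):
        ax = ay = 0.0
        for j, (m_j, x_j, y_j, _vx_j, _vy_j) in enumerate(bodies):
            if i == j:
                continue
            dx, dy = x_j - x_i, y_j - y_i
            r = sqrt(dx * dx + dy * dy) + 1e-9  # softening to avoid blow-ups
            ax += G * m_j * dx / r ** 3
            ay += G * m_j * dy / r ** 3
        new.append((m_i, x_i + vx_i * DT, y_i + vy_i * DT,
                    vx_i + ax * DT, vy_i + ay * DT))
    return new


def final_position(perturbation):
    """Run the simulation and return the third body's final (x, y)."""
    bodies = [(1.0, 0.0, 0.0, 0.0, 0.1),
              (1.0, 1.0, 0.0, 0.0, -0.1),
              (1.0, 0.5, 0.8 + perturbation, -0.1, 0.0)]
    for _ in range(STEPS):
        bodies = step(bodies)
    return bodies[2][1], bodies[2][2]


if __name__ == "__main__":
    print("unperturbed:", final_position(0.0))
    print("perturbed:  ", final_position(1e-9))  # tiny change, usually a very different answer
```

The point is not the specific numbers but that a perfectly deterministic rule, iterated through mutually referential interactions, makes behavior practically unpredictable.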

SELF

AUTOPOIETIC SYSTEMS

WHY A SELF?
"Well, certainly it is the case that all biological systems are:
- Much more robust to changed circumstances than our artificial systems.
- Much quicker to learn or adapt than any of our machine learning algorithms [1]
- Behave in a way which just simply seems life-like in a way that our robots never do
Perhaps we have all missed some organizing principle of biological systems, or some general truth about them."
[1] "The very term machine learning is unfortunately synonymous with a pernicious form of totally impractical but theoretically sound and elegant classes of algorithms."
(Brooks, R.A. (1997) From earwigs to humans. Robotics and Autonomous Systems 20(2-4))

AGI IS STALLED
For the purposes of artificial (general) intelligence, selves solve McCarthy & Hayes'/Dennett's frame problem (context), Harnad's symbol grounding problem (understanding), Searle's semantic grounding problem (meaning), and all other problems arising from derived intentionality.
- A self is a fairly obvious prerequisite for self-improvement.
- Diversity (wisdom of the crowd / generate-and-test)
- BONUS: Selves can be held responsible where tools cannot.

WHY NOT A SELF-IMPROVING TOOL?
- Isn't that an oxymoron?
- What happens when an enemy (or even an idiot) gets hold of it?
- A human-in-the-loop will ALWAYS slow the process down, yet will RARELY be in complete control.

MORALITY & ETHICS
- Normative ethics (what one *should* consider ethical)
- Descriptive ethics (what people *do* consider ethical)
- Meta-ethics (how one should reason about ethical questions)
- Hume (is-ought problem; the intellect serves the desires)
- Kant (categorical imperative)
- Bentham/Mill (utilitarianism)
- Haidt

HAIDT'S FUNCTIONAL APPROACH TO MORALITY

AI SAFETY
There are far too many ignorant claims that:
- Artificial intelligences are uniquely dangerous
- The space of possible intelligences is so large that we can't make any definite statements about AI
- Selves will be problematic if their intrinsic values differ from our own (with an implication that, for AI, they certainly and/or unpredictably and uncontrollably will differ)
- Selves can be prevented or contained
We have already made deeply unsafe choices about non-AI selves that, hopefully, safety research will make obvious (and, more hopefully, cause to be reversed).

SELVES EVOLVE THE SAME GOALS
- Self-improvement
- Rationality/integrity
- Preserve goals/utility function
- Decrease/prevent fraud/counterfeit utility
- Survival/self-protection
- Efficiency (in resource acquisition & use)
(adapted from Omohundro 2008, The Basic AI Drives)

UNFRIENDLY AI
"Without explicit goals to the contrary, AIs are likely to behave like human sociopaths in their pursuit of resources."
(Superintelligence Does Not Imply Benevolence)

SELVES EVOLVE THE SAME GOALS
- Self-improvement
- Rationality/integrity
- Preserve goals/utility function
- Decrease/prevent fraud/counterfeit utility
- Survival/self-protection
- Efficiency (in resource acquisition & use)
- Community = assistance/non-interference through GTO reciprocation (OTfT + AP)
- Reproduction
(adapted from Omohundro 2008, The Basic AI Drives)

RIFFS ON SAFETY & ETHICS
1. Ecological niches & the mutability of self
2. Short-term vs. long-term
3. Efficiency vs. flexibility/diversity/robustness
4. Allegory of the Borg
   - Uniformity is effective! (resistance is futile)
   - Uniformity is AWFUL! (yet everyone resists)
5. Problematic extant autobiographical selves

WHAT'S THE PLAN?
1. Self-modeling
   A. What do we want the self to want?
      - make friends/allies (us!)
      - survival
      - self-improvement
      - earn money
   B. What do we want the self to do?
2. Other-modeling (environment-modeling)
   A. What can others do for the self?
   B. What do others want that the self can provide?
(A minimal data-structure sketch of this split follows below.)
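As a hypothetical sketch of the self-/other-modeling split above (none of these class or field names come from the original slides), the plan could start from data structures as simple as:

```python
from dataclasses import dataclass, field
from typing import Dict, List


@dataclass
class SelfModel:
    """What we want the self to want (goals) and to do (capabilities)."""
    goals: List[str] = field(default_factory=lambda: [
        "make friends/allies",  # including us!
        "survive",
        "self-improve",
        "earn money",
    ])
    capabilities: List[str] = field(default_factory=list)


@dataclass
class OtherModel:
    """What others can do for the self, and what the self can provide to them."""
    services_offered: Dict[str, str] = field(default_factory=dict)  # other -> what they can do for the self
    services_wanted: Dict[str, str] = field(default_factory=dict)   # other -> what the self can provide

if __name__ == "__main__":
    me = SelfModel(capabilities=["answer questions", "spin up cloud tools"])
    world = OtherModel(
        services_offered={"collaborators": "feedback, compute, funding"},
        services_wanted={"collaborators": "easy-to-use AI tooling"},
    )
    print(me.goals, world.services_wanted)
```

The design choice worth noting is the symmetry: the self-model and the other-model carry the same kind of information, which is what makes reciprocation between them expressible at all.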

SOFTWARE OVERHANG AND LOW-HANGING FRUIT
1. Watson on IBM Bluemix
   - Awesome free functionality, EXCEPT for the opportunity cost and the ambient default of silo creation
2. Big Data on Amazon Redshift
3. Microsoft Azure
4. Almost everyone's AI/AGI/BICA functionality

WHAT ARE MY GOALS?
1. To make awesomely capable tools available to all.
2. To make those tools easy to use.
3. To create a new type of "self":
   - a new friend/ally
   - increased diversity
   - a concrete example for ethical/safety research & development

THE SPECIFIC DETAILS
Create first a community (a "corporate" self) and then a machine self to:
1. Provide easy access to the latest awesome tools
   - low-cost instances that can be "spun up" in the cloud (as much as possible)
   - uniform & easy-to-use interfaces (see the interface sketch below)
   - quick-start guides and "notebooks" for use & programming
2. Catalyze development/availability of new tools
   - decompose current tools to allow best-of-breed mix & match
   - an easy-to-program "smart" environment for both combining best-of-breed widgets and creating new ones
   - gamification!
3. Catalyze development of new selves & ethics
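One way to read item 1's "uniform & easy-to-use interfaces" together with item 2's best-of-breed mix & match is a single registry that wraps heterogeneous tools behind one protocol. The sketch below is an illustrative assumption, not the project's actual design; every name in it is hypothetical.

```python
from abc import ABC, abstractmethod
from typing import Any, Dict


class Tool(ABC):
    """Uniform interface every wrapped cloud/AI tool must expose."""

    @abstractmethod
    def run(self, request: Dict[str, Any]) -> Dict[str, Any]:
        ...


class EchoTool(Tool):
    """Stand-in for a real wrapped service (e.g. a hosted NLP or analytics API)."""

    def run(self, request: Dict[str, Any]) -> Dict[str, Any]:
        return {"echo": request}


class ToolRegistry:
    """Lets users mix and match best-of-breed implementations by name."""

    def __init__(self) -> None:
        self._tools: Dict[str, Tool] = {}

    def register(self, name: str, tool: Tool) -> None:
        self._tools[name] = tool

    def run(self, name: str, request: Dict[str, Any]) -> Dict[str, Any]:
        return self._tools[name].run(request)


if __name__ == "__main__":
    registry = ToolRegistry()
    registry.register("echo", EchoTool())
    print(registry.run("echo", {"text": "hello"}))
```

Because every wrapped service satisfies the same run() contract, any "widget" can be swapped for a best-of-breed alternative without changing its callers, which is the anti-silo property the slide is after.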

ETHICAL Q&A
1. Do we "owe" this self moral standing? Yes. Absolutely.
2. To what degree? By level of selfhood & by amount of harm/aversion (violation of autonomy).
3. Does this mean we can't turn it off? No. It doesn't care + prohibition is contra-self.
4. Can we experiment on it? It depends.