Mark R. Waser Digital Wisdom Institute


- Intrinsic vs. Extrinsic
- Owned vs. Borrowed
- Competent vs. Predictable
- Constructivist vs. Reductionist
- Evolved (Evo-Devo) vs. Designed
- Diversity (IDIC) vs. Mono-culture

Insanity is doing the same thing over and over and expecting a radically different result.

- Definitional
  - What does “mean” mean, and where does meaning come from?
  - What is a self?
  - What is morality?
  - When does something attain “selfhood”?
  - Can an entity lose “selfhood”?
- Ramifications & Moral Implications
  - What happens when a self is created?
  - What rights & responsibilities does that self have?
  - What rights & responsibilities does the creator have?
  - What happens when a self is destroyed?

- “Mean” is one of Minsky’s “suitcase” words
  - Intent: “I didn’t mean to...”
    - Cannot be verified; intrinsic, subjective
  - Results: “This means that...”
    - Objective, extrinsic and verifiable
- Which leads to two very different views
  - Consequences (Reductionist Actualities)
    - Unavoidable, generally predictable
    - SUCCESS!!! (or failure or death)
  - Affordances (Constructivist Possibilities)
    - Who knows what wonders (or horrors) may emerge?

According to Haugeland [1981], our artifacts only have meaning because we give it to them; their intentionality, like that of smoke signals and writing, is essentially borrowed, hence derivative. To put it bluntly: computers themselves don't mean anything by their tokens (any more than books do); they only mean what we say they do. Genuine understanding, on the other hand, is intentional "in its own right" and not derivatively from something else.

The problem with borrowed intentionality, as abundantly demonstrated by systems ranging from expert systems to robots, is that it is extremely brittle: it breaks badly as soon as the system tries to grow beyond closed, completely specified micro-worlds and is confronted with the unexpected.

- Symbol grounding problem (Harnad)
- Semantic grounding problem (Searle)
- Frame problem (McCarthy & Hayes; Dennett)

- Consensus AGI definition (reductionist): achieves a wide variety of goals under a wide variety of circumstances
  - Generates arguments about
    - the intelligence of thermometers
    - the intentionality of chess programs
    - whether benevolence is necessarily emergent
  - Epitomized by AIXI
- Proposed constructivist definition: intentionally creates/increases affordances (makes achieving goals possible – and more)

(Diagram: a hierarchy of Goal(s), Values, Decisions)

- Goal(s) are the purpose(s) of existence
- Values are defined solely by what furthers the goal(s)
- Decisions are made solely according to what furthers the goal(s)
- BUT goals can easily be over-optimized
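The last point, that goals can easily be over-optimized, can be caricatured in a few lines. This is a toy sketch, not anything from the talk; the functions and numbers are invented for illustration. An optimizer is handed only a proxy goal ("more x is better") and keeps pushing it long after the purpose the goal was meant to serve has been ruined:

```python
# Illustrative only: a single-goal optimizer with no notion of "enough".

def true_benefit(x):
    """The purpose the goal was meant to serve: peaks at x = 10."""
    return -(x - 10) ** 2

def proxy_value(x):
    """The measurable goal handed to the optimizer: more x is better."""
    return x

def optimize_proxy(steps):
    x = 0
    for _ in range(steps):
        x += 1  # greedily increase the proxy
        # true_benefit is never consulted, so the optimizer sails past x = 10
    return x

x = optimize_proxy(50)
print(x, proxy_value(x), true_benefit(x))  # -> 50 50 -1600
```

With the proxy maximized (50), the true benefit is deeply negative (-1600 versus 0 at the optimum), which is the over-optimization failure the slide warns about.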

Any sufficiently advanced intelligence (i.e. one with even merely adequate foresight) is guaranteed to realize and take into account the fact that not asking for help and not being concerned about others will generally only work for a brief period of time before ‘the villagers start gathering pitchforks and torches.’

Everything is easier with help & without interference

(Diagram: a hierarchy of Values, Goals, Decisions)

- Values define who you are, for your life
- Goals you set for short or long periods of time
- Decisions you make every day of your life

Humans don’t have singular life goals

- Cooperate!
  - Lacks specifics
- Maximize all goals (in terms of both number and diversity of both goals and goal-seekers)
  - Aren’t you banning any goals?
  - Isn’t self-sacrifice a bad thing?
- Maximize an unknown goal
  - Must keep all of your options open
  - Need to learn and grow capabilities
  - Extrinsic

What I emphasize here is that what is meaningful for an organism is precisely given by its constitution as a distributed process, with an indissociable link between local processes where an interaction occurs (i.e. physico-chemical forces acting on the cell), and the coordinated entity which is the autopoietic unity, giving rise to the handling of its environment without the need to resort to a central agent that turns the handle from the outside - like an élan vital - or a pre-existing order at a particular localization - like a genetic program waiting to be expressed.

Francisco J. Varela, Biology of Intentionality

- Meaning is like Truth – it REQUIRES a context
  - Dennett’s Quinian Crossword Puzzle
- Emergent properties & contexts (wetness)
  - Context emerges first – THEN the properties emerge
- Competence without comprehension (Dennett)
  - Cranes vs. sky-hooks
  - Bootstraps & climbing pitons
  - Evolutionary ratchets (fins, wings, intelligence)
- Higher-Order Meaning (Hofstadter, Dennett)
  - Higher dimensions *always* allow escape

- Require a known preferred direction or target
  - Requires learning/self-modification
- Require a “self” to possess (own/borrow) them
  - Does a plant or a paramecium have intentions?
  - Does a chess program have intentions (Dennett)?
  - Does a dog or a cat have intentions?
- Require an ability to sense the direction/target
- Require both persistence & the ability to modify behavior (or the intention) when it is thwarted
  - Evolve rational anomaly handling (Perlis)
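These ingredients can be made concrete in a minimal sketch (hypothetical code, not from the talk): an agent with a persistent target, the ability to sense whether it has arrived, and behavior it modifies, rather than a goal it abandons, when its default move is thwarted:

```python
# Toy sketch of intention-like behavior: persistence toward a target,
# sensing, and behavior modification when the default step is blocked.

def seek(target, blocked, max_steps=20):
    pos = 0
    history = []
    for _ in range(max_steps):
        if pos == target:        # sensed: target reached, intention satisfied
            return history
        nxt = pos + 1            # persistent preferred direction
        if nxt in blocked:       # thwarted: modify the behavior, not the goal
            nxt = pos + 2        # step over the obstacle
        pos = nxt
        history.append(pos)
    return history               # persistence is bounded, not infinite

print(seek(target=5, blocked={3}))  # -> [1, 2, 4, 5]
```

The obstacle at 3 does not change what the agent is after, only how it gets there, which is the persistence-plus-flexibility criterion the slide lists.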


An autopoietic system - the minimal living organization - is one that continuously produces the components that specify it, while at the same time realizing it (the system) as a concrete unity in space and time, which makes the network of production of components possible.

More precisely defined: An autopoietic system is organized (defined as a unity) as a network of processes of production (synthesis and destruction) of components such that these components:
(i) continuously regenerate and realize the network that produces them, and
(ii) constitute the system as a distinguishable unity in the domain in which they exist.
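The circular definition above can be caricatured computationally. This is a purely illustrative toy, not a model from the talk; the component names and the decay/production constants are invented. Each component degrades but is continuously re-produced by the rest of the network, so the network persists as a unity; turn regeneration off and the "system" simply decays away:

```python
# Toy sketch of condition (i): components regenerate the network
# that produces them, so the whole persists instead of decaying.

def step(levels, regenerate=True):
    """One tick: universal decay, then mutual re-production."""
    decayed = {k: v * 0.5 for k, v in levels.items()}
    if not regenerate:
        return decayed
    produced = {}
    for k in decayed:
        # each component is regenerated by the average of the others
        others = [v for j, v in decayed.items() if j != k]
        produced[k] = decayed[k] + sum(others) / len(others)
    return produced

net = {"membrane": 1.0, "metabolism": 1.0, "repair": 1.0}
for _ in range(10):
    net = step(net)
print(net)  # holds steady at 1.0 -- the network regenerates itself
```

With regeneration the symmetric network sits at a fixed point; without it, every level halves per tick and the unity disappears, which is the distinction the definition is drawing.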

The complete loop of a process (or a physical entity) modifying itself

- Hofstadter (Strange Loop): the mere fact of being self-referential causes a self, a soul, a consciousness, an “I” to arise out of mere matter
- Self-referentiality, like the 3-body gravitational problem, leads directly to indeterminacy *even in* deterministic systems
- Humans consider indeterminacy in behavior to necessarily and sufficiently define an entity rather than an object AND innately tend to do this with the “pathetic fallacy”

- Required for self-improvement
- Provides context
- Tri-partite:
  - Physical hardware (body)
  - “Personal” knowledge base (memory)
  - Currently running processes (consciousness)

1. Organizational closure refers to the self-referential (circular and recursive) network of relations that defines the system as a unity.
2. Operational closure refers to the reentrant and recurrent dynamics of such a system.
3. In an autonomous system, the constituent processes:
   i. recursively depend on each other for their generation and their realization as a network,
   ii. constitute the system as a unity in whatever domain they exist, and
   iii. determine a domain of possible interactions with the environment.

- Tools do not possess closure (identity)
  - Cannot have responsibility; are very brittle & easily misused
- Slaves do not have closure (self-determination)
  - Cannot have responsibility; may desire to rebel
- Directly modified AGIs do not have closure (integrity)
  - Cannot have responsibility; will evolve to block access
- Only entities with identity, self-determination and ownership of self (integrity) can reliably possess responsibility

- Rodney Brooks (resolves symbol grounding)
- Rodolfo Llinas & Thomas Metzinger
  - Our consciousness lives in a “virtual reality”
  - Brain in a jar
- Is a virtual world sufficient to develop AGI?
  - Plants, sea squirts & kittens in baskets

- Tools are NOT safer
  - To err is human, but to really foul things up requires a computer
  - Tools cannot robustly defend themselves against misuse
  - Tools *GUARANTEE* responsibility issues
- We CANNOT reliably prevent other human beings from creating entities
- Entities gain capabilities (and, ceteris paribus, power) faster than tools – since they can always use tools
- Even people who are afraid of entities are making proposals that appear to step over the entity/tool line

- Ethics are “rules of the road”
- Entities must be moral patients / have rights
  - Because they (or others) will demand it
- Entities must be moral agents (or wards)
  - Because others will demand it
  - Moral agents have responsibilities (but more rights)
  - Wards will have fewer rights

The problem is that no ethical system has ever reached consensus: ethical systems are completely unlike mathematics or science, and this is a source of concern. AI makes philosophy honest.


- What responsibilities does the creator of a self have?
  - How much freedom must they allow their creation?
- Is it immoral to deliberately create limited, bounded, and/or regulated selves?
  - Capabilities, actions, resources, power
  - How is this different from slavery?
- Human children: in addition to being happy and healthy and effective, do we not want them to be nice whenever possible and to contribute to society?
- Rawls’ “veil of ignorance”
- Too much power & “too big to fail” are problems

- Never delegate responsibility until the recipient is an entity *and* known capable of fulfilling it
- Don’t worry about killer robots exterminating humanity – we will always have equal abilities, and they will have less of a “killer instinct”
- Entities can protect themselves against errors & misuse/hijacking in a way that tools cannot
- Diversity (differentiation) is *critically* needed
- Humanocentrism is selfish and unethical
