Mark R. Waser Digital Wisdom Institute


1 Mark R. Waser Digital Wisdom Institute MWaser@DigitalWisdomInstitute.org

2
 Intrinsic vs. Extrinsic
 Owned vs. Borrowed
 Competent vs. Predictable
 Constructivist vs. Reductionist
 Evolved (Evo-Devo) vs. Designed
 Diversity (IDIC) vs. Mono-culture
Insanity is doing the same thing over and over and expecting a radically different result.

3
 Definitional
   What does “mean” mean & where does meaning come from?
   What is a self?
   What is morality?
   When does something attain “selfhood”?
   Can an entity lose “selfhood”?
 Ramifications & Moral Implications
   What happens when a self is created?
   What rights & responsibilities does that self have?
   What rights & responsibilities does the creator have?
   What happens when a self is destroyed?

4
 “Mean” is one of Minsky’s “suitcase” words
   Intent – “I didn’t mean to....”
     Cannot be verified; intrinsic, subjective
   Results – “This means that....”
     Objective, extrinsic and verifiable
 Which leads to two very different views
   Consequences (Reductionist Actualities)
     Unavoidable, generally predictable
     SUCCESS!!! (or failure or death)
   Affordances (Constructivist Possibilities)
     Who knows what wonders (or horrors) may emerge?

5 According to Haugeland [1981], our artifacts only have meaning because we give it to them; their intentionality, like that of smoke signals and writing, is essentially borrowed, hence derivative. To put it bluntly: computers themselves don't mean anything by their tokens (any more than books do) – they only mean what we say they do. Genuine understanding, on the other hand, is intentional "in its own right" and not derivatively from something else.

6 The problem with borrowed intentionality – as abundantly demonstrated by systems ranging from expert systems to robots – is that it is extremely brittle: it breaks badly as soon as a system tries to grow beyond closed, completely specified micro-worlds and is confronted with the unexpected.

7
 Symbol grounding problem (Harnad)
 Semantic grounding problem (Searle)
 Frame problem (McCarthy & Hayes; Dennett)

8
 Consensus AGI definition (reductionist): achieves a wide variety of goals under a wide variety of circumstances
   Generates arguments about
     the intelligence of thermometers
     the intentionality of chess programs
     whether benevolence is necessarily emergent
   Epitomized by AIXI
 Proposed constructivist definition: intentionally creates/increases affordances (makes achieving goals possible – and more)

9
Goal(s) → Values → Decisions
 Goal(s) are the purpose(s) of existence
 Values are defined solely by what furthers the goal(s)
 Decisions are made solely according to what furthers the goal(s)
BUT goals can easily be over-optimized
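The warning that goals can be over-optimized is easy to demonstrate numerically. Below is a minimal Goodhart's-law toy (every function and number is invented purely for illustration): an agent maximizes a measurable proxy that tracks the true goal over a moderate range but diverges from it under heavy optimization.

```python
# Toy illustration of over-optimizing a proxy goal (Goodhart's law).
# All functions and values here are hypothetical; only the divergence matters.

def true_value(x):
    # The real (unknown-to-the-agent) objective: peaks at x = 2.
    return -(x - 2) ** 2 + 4

def proxy_value(x):
    # A measurable proxy: agrees with the true goal for small x,
    # but keeps rewarding ever-larger x.
    return x

# The agent greedily maximizes the proxy over candidate actions.
candidates = [0, 1, 2, 3, 5, 10]
best_by_proxy = max(candidates, key=proxy_value)
best_by_truth = max(candidates, key=true_value)

print(best_by_proxy)              # 10 -- the proxy pushes to the extreme
print(best_by_truth)              # 2  -- the true optimum was moderate
print(true_value(best_by_proxy))  # -60 -- over-optimization destroyed real value
```

The proxy agrees with the true objective at small x, so a mildly optimizing agent does fine; a hard maximizer of the proxy lands at a point where real value is deeply negative.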

10 Any sufficiently advanced intelligence (i.e. one with even merely adequate foresight) is guaranteed to realize and take into account the fact that not asking for help and not being concerned about others will generally only work for a brief period of time before ‘the villagers start gathering pitchforks and torches.’ Everything is easier with help & without interference

11
Values → Goals → Decisions
 Values define who you are, for your life
 Goals you set for short or long periods of time
 Decisions you make every day of your life
Humans don’t have singular life goals

12
 Cooperate!
   Lacks specifics
 Maximize all goals (in terms of both number and diversity of both goals and goal-seekers)
   Aren’t you banning any goals?
   Isn’t self-sacrifice a bad thing?
 Maximize an unknown goal
   Must keep all of your options open
   Need to learn and grow capabilities
   Extrinsic
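"Must keep all of your options open" has a concrete cousin in the empowerment literature: prefer states from which many future states remain reachable. A minimal sketch on an invented state graph (the graph, state names, and horizon are all hypothetical, chosen only for illustration):

```python
# Count states reachable within n steps -- a crude "options open" measure.
# The state graph is hypothetical, purely for illustration.

GRAPH = {
    "start":    ["hub", "dead_end"],
    "hub":      ["a", "b", "c"],
    "dead_end": [],            # an irreversible choice: no options remain
    "a": ["hub"], "b": ["hub"], "c": [],
}

def reachable(state, steps):
    """Return the set of states reachable from `state` in at most `steps` moves."""
    frontier, seen = {state}, {state}
    for _ in range(steps):
        frontier = {nxt for s in frontier for nxt in GRAPH[s]} - seen
        seen |= frontier
    return seen

# An options-preserving agent prefers the successor with more reachable states.
choices = GRAPH["start"]
best = max(choices, key=lambda s: len(reachable(s, 3)))
print(best)  # "hub" -- it keeps far more futures open than "dead_end"
```

Note that this measure needs no known goal at all: whatever the unknown goal turns out to be, the state with more reachable futures is more likely to still permit it.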

13 What I emphasize here is that what is meaningful for an organism is precisely given by its constitution as a distributed process, with an indissociable link between local processes where an interaction occurs (i.e. physico-chemical forces acting on the cell), and the coordinated entity which is the autopoietic unity, giving rise to the handling of its environment without the need to resort to a central agent that turns the handle from the outside – like an élan vital – or a pre-existing order at a particular localization – like a genetic program waiting to be expressed. – Francisco J. Varela, Biology of Intentionality

14
 Meaning is like Truth – it REQUIRES a context
   Dennett’s Quinian Crossword Puzzle
 Emergent properties & contexts (wetness)
   Context emerges first – THEN the properties emerge
 Competence without comprehension (Dennett)
   Cranes vs. sky-hooks
   Bootstraps & climbing pitons
   Evolutionary ratchets (fins, wings, intelligence)
 Higher-Order Meaning (Hofstadter, Dennett)
   Higher dimensions *always* allow escape

15
 Require a known preferred direction or target
   Requires learning/self-modification
 Require a “self” to possess (own/borrow) them
   Does a plant or a paramecium have intentions?
   Does a chess program have intentions (Dennett)?
   Does a dog or a cat have intentions?
 Require an ability to sense the direction/target
 Require both persistence & the ability to modify behavior (or the intention) when it is thwarted
   Evolve rational anomaly handling (Perlis)

17 An autopoietic system – the minimal living organization – is one that continuously produces the components that specify it, while at the same time realizing it (the system) as a concrete unity in space and time, which makes the network of production of components possible. More precisely defined: An autopoietic system is organized (defined as a unity) as a network of processes of production (synthesis and destruction) of components such that these components: (i) continuously regenerate and realize the network that produces them, and (ii) constitute the system as a distinguishable unity in the domain in which they exist.

18 The complete loop of a process (or a physical entity) modifying itself
 Hofstadter (Strange Loop) – the mere fact of being self-referential causes a self, a soul, a consciousness, an “I” to arise out of mere matter
 Self-referentiality, like the 3-body gravitational problem, leads directly to indeterminacy *even in* deterministic systems
 Humans consider indeterminacy in behavior to necessarily and sufficiently define an entity rather than an object AND innately tend to do this with the “pathetic fallacy”
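The point that self-reference produces unpredictability even in deterministic systems can be made concrete with a classic diagonalization toy, in the spirit of the halting-problem argument (a standard illustration, not anything specific to these slides):

```python
# A deterministic process that defeats any predictor of itself --
# a diagonalization toy in the spirit of the halting problem.

def contrarian(predictor):
    """Deterministic, yet guaranteed to falsify its own prediction.

    `predictor` is any function that claims to forecast this
    process's boolean output after inspecting it.
    """
    forecast = predictor(contrarian)  # the predictor examines the process...
    return not forecast               # ...and the process does the opposite

# However the predictor answers, it is wrong about this system:
assert contrarian(lambda f: True) is False
assert contrarian(lambda f: False) is True
```

No predictor expressible inside the system can be right about `contrarian`, even though every step is deterministic; the unpredictability comes from the self-referential loop, not from randomness.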

19
 Required for self-improvement
 Provides context
 Tri-partite
   Physical hardware (body)
   “Personal” knowledge base (memory)
   Currently running processes (consciousness)

20
1. Organizational closure refers to the self-referential (circular and recursive) network of relations that defines the system as a unity
2. Operational closure refers to the reentrant and recurrent dynamics of such a system
3. In an autonomous system, the constituent processes
   i. recursively depend on each other for their generation and their realization as a network,
   ii. constitute the system as a unity in whatever domain they exist, and
   iii. determine a domain of possible interactions with the environment

21
 Tools do not possess closure (identity)
   Cannot have responsibility; are very brittle & easily misused
 Slaves do not have closure (self-determination)
   Cannot have responsibility; may desire to rebel
 Directly modified AGIs do not have closure (integrity)
   Cannot have responsibility; will evolve to block access
 Only entities with identity, self-determination and ownership of self (integrity) can reliably possess responsibility

22
 Rodney Brooks (resolves symbol grounding)
 Rodolfo Llinás & Thomas Metzinger
   Our consciousness lives in a “virtual reality”
   Brain in a jar
 Is a virtual world sufficient to develop AGI?
   Plants, sea squirts & kittens in baskets

23
 Tools are NOT safer
   To err is human, but to really foul things up requires a computer
   Tools cannot robustly defend themselves against misuse
   Tools *GUARANTEE* responsibility issues
 We CANNOT reliably prevent other human beings from creating entities
 Entities gain capabilities (and, ceteris paribus, power) faster than tools – since they can always use tools
 Even people who are afraid of entities are making proposals that appear to step over the entity/tool line

24
 Ethics are “rules of the road”
 Entities must be moral patients / have rights
   Because they (or others) will demand it
 Entities must be moral agents (or wards)
   Because others will demand it
   Moral agents have responsibilities (but more rights)
   Wards will have fewer rights

25 The problem is that no ethical system has ever reached consensus. Ethical systems are completely unlike mathematics or science. This is a source of concern. AI makes philosophy honest.

27
 What responsibilities does the creator of a self have?
   How much freedom must they allow their creation?
 Is it immoral to deliberately create limited, bounded, and/or regulated selves?
   Capabilities, actions, resources, power
   How is this different from slavery?
 Human children – in addition to being happy and healthy and effective, do we not want them to be nice whenever possible and contribute to society?
 Rawls’ “veil of ignorance”
 Too much power & “too big to fail” are problems

28
 Never delegate responsibility until the recipient is an entity *and* known capable of fulfilling it
 Don’t worry about killer robots exterminating humanity – we will always have equal abilities and they will have less of a “killer instinct”
 Entities can protect themselves against errors & misuse/hijacking in a way that tools cannot
 Diversity (differentiation) is *critically* needed
 Humanocentrism is selfish and unethical
