Safely Crowd-Sourcing Critical Mass for a Self-Improving Human-Level Learner ("Seed AI") Mark R. Waser, Digital Wisdom Institute

Goal
To safely create a human-level (or higher) artificial general intelligence (AGI) through self-improvement/learning:
 Without debating what intelligence is
 Without debating consciousness, emotions, etc.
 Without constantly re-inventing the wheel

Self
 Required for self-improvement
 The complete loop of a process (or a physical entity) modifying itself
 The mere fact of being self-referential causes a self, a soul, a consciousness, an "I" to arise out of mere matter (Hofstadter, I Am a Strange Loop)
 Must, particularly if indeterminate in behavior, necessarily and sufficiently be defined as an entity rather than an object
   Humans innately tend to do this via the "pathetic fallacy"
 Tri-partite:
   Physical hardware
   "Personal" memory/knowledge base
   Currently running processes
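
The tri-partite decomposition above can be sketched as a simple data structure. This is an illustrative model only (all names here are my own, not from the talk), showing the "complete loop" of a self that records its own modifications:

```python
from dataclasses import dataclass, field

@dataclass
class SelfModel:
    """Tri-partite self: physical hardware, 'personal' knowledge base,
    and currently running processes."""
    hardware: dict = field(default_factory=dict)        # physical substrate description
    knowledge_base: dict = field(default_factory=dict)  # "personal" memory/knowledge
    processes: list = field(default_factory=list)       # currently running processes

    def modify(self, process_name: str) -> None:
        """The 'complete loop': the self modifies itself and records
        that modification in its own knowledge base."""
        self.processes.append(process_name)
        self.knowledge_base[process_name] = "self-modification recorded"

s = SelfModel()
s.modify("learner")
```

The point of the sketch is only that self-reference closes the loop: the entity that changes and the record of the change live in the same structure.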

Learning
 Learning is the functional integration of knowledge
 A "learner" must be capable of integrating all acquired knowledge into its world model and skill portfolio to a sufficient extent that it is both immediately usable and can be built upon
 "Memorization" is NOT learning (data only – Watson)
 Mere "algorithm execution" is NOT learning
   *unless* it is self-modifying the algorithms recursively
 "Discovery" is not necessary for learning
   although that is a skill that will be learned quickly enough

Knowledge Integration
 Information integration theory (Tononi 2004) claims that consciousness is one and the same as a system's capacity to integrate information
 A "knowledge integrator":
   incorporates knowledge into its world model
   "understands" and therefore can predict
   refactors knowledge and data for usability (/speed)
 A word problem solver
   and, eventually, an analogy builder
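
To make the distinction between memorization and integration concrete, here is a minimal, hypothetical sketch of a "knowledge integrator": a new fact is not merely stored but becomes immediately usable for prediction through the world model (class and method names are illustrative):

```python
class KnowledgeIntegrator:
    """Integration, not memorization: incorporated facts are immediately
    queryable via the world model, so the system can predict with them."""
    def __init__(self):
        self.world_model = {}  # subject -> {relation: object}

    def incorporate(self, subject: str, relation: str, obj: str) -> None:
        """Fold a new fact into the world model."""
        self.world_model.setdefault(subject, {})[relation] = obj

    def predict(self, subject: str, relation: str):
        """'Understands' and therefore can predict: query the
        integrated model; returns None when nothing is known."""
        return self.world_model.get(subject, {}).get(relation)

ki = KnowledgeIntegrator()
ki.incorporate("fire", "causes", "heat")
```

A pure memorizer would store the sentence; an integrator can answer `predict("fire", "causes")` because the fact now participates in its model.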

Critical Mass
"A reasonably achievable minimal set of initial cognitive and learning characteristics such that a learner starting anywhere above the critical knowledge will acquire the vital knowledge that a typical human learner would be able to acquire." (Samsonovich 2011)
 Effectively, once a learner truly knows how to learn, it is capable of learning anything – subject, of course, to time and other constraints
 Thus, a learner above critical mass is a "seed AGI" fully capable of growing into a full-blown human-level (or, more likely, higher-level) artificial general intelligence

Critical Components I: Self-Knowledge & Reflection
 A self must know itself to be a self
 Composed of three parts:
   The running processes
   The personal memory/knowledge base
   The physical hardware
 Must start with:
   A competent model of each
   Sensors to detect changes and their effects

Critical Components II: Explicit Goals
 Do not defect from the community
 Do not become too large/powerful
 Acquire and integrate knowledge
 Instrumental goals

Critical Components III: Reliability
 Self-Control, Integrity, Autonomy, Responsibility
 In "predictive control" of its own state and that of the physical objects that support it
 Note: this is a marked deviation from the human example

Humans are...
 Evolved to self-deceive in order to better deceive others (Trivers 1991)
 Unable to directly sense agency (Aarts et al. 2005)
 Prone to false illusory experiences of self-authorship (Buehner and Humphreys 2009)
 Unable to correctly retrieve the reasoning behind moral judgments (Hauser et al. 2007)
 Almost always unaware of what morality is and why it should be practiced...

The Function of Morality
"to suppress or regulate selfishness and make cooperative social life possible"
– J. Haidt & S. Kesebir, Handbook of Social Psychology

Architecture
Processes will be divided into three main classes:
 Operating system processes
 Subconscious/tool processes
 One serial consciousness/learner process (CLP)
The CLP will be able to create, modify, and/or influence many of the subconscious/tool processes.
The CLP will NOT be given access to modify operating system processes.
 Indeed, it will have multiple/redundant logical, emotional, and moral reasons not to even try.
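
The three-class division and the asymmetric access rule above can be sketched as a permission table. This is a hypothetical illustration of the constraint, not an actual enforcement mechanism from the talk:

```python
from enum import Enum

class ProcessClass(Enum):
    OPERATING_SYSTEM = "os"
    SUBCONSCIOUS = "subconscious"    # tool processes
    CONSCIOUS_LEARNER = "clp"        # the one serial consciousness/learner

# Hypothetical permission table: the CLP may create/modify subconscious
# processes (and itself), but is never granted write access to
# operating-system processes.
MODIFIABLE_BY_CLP = {
    ProcessClass.OPERATING_SYSTEM: False,
    ProcessClass.SUBCONSCIOUS: True,
    ProcessClass.CONSCIOUS_LEARNER: True,  # recursive self-modification
}

def clp_may_modify(target: ProcessClass) -> bool:
    """Return whether the CLP is permitted to modify the target class."""
    return MODIFIABLE_BY_CLP[target]
```

Note that the talk's real safeguard is motivational (logical, emotional, and moral reasons not to try), with any hard permission boundary as a backstop.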

Operating System Architecture
 Open, Pluggable, Service-Oriented/Message-Passing:
   Quickly adopt novel input streams
   Handle resource requests and allocation
   Provide connectivity between components
 Safety Features:
   Act as a "black box" security monitor capable of reporting problems without the consciousness's awareness
   Able to "manage" the CLP by manipulating the amount of processor time and memory available to it (assuming that the normal subconscious processes are unable to do so)
   Other protections against hostile humans, inept builders, and the learner itself may be implemented as well
 Robot Operating System (ROS)
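
The "black box" monitor and resource-based management of the CLP could look something like the following sketch. Everything here (class name, budgets, API) is assumed for illustration; the talk only specifies the capabilities, not an implementation:

```python
class ResourceGovernor:
    """Hypothetical OS-level 'black box': logs incidents outside the
    CLP's awareness and manages the CLP by shrinking its resource
    allocation rather than by editing its code."""
    def __init__(self, cpu_share: float = 1.0, memory_mb: int = 4096):
        self.cpu_share = cpu_share      # fraction of processor time
        self.memory_mb = memory_mb      # memory budget
        self.incident_log = []          # hidden from the CLP

    def report(self, incident: str) -> None:
        """Record a problem without notifying the consciousness."""
        self.incident_log.append(incident)

    def throttle(self, factor: float) -> None:
        """'Manage' the CLP by reducing processor time and memory."""
        self.cpu_share *= factor
        self.memory_mb = int(self.memory_mb * factor)

gov = ResourceGovernor()
gov.report("unexpected self-modification attempt")
gov.throttle(0.5)
```

The design choice worth noting: control is exercised through the substrate (time and memory), which the CLP depends on but does not administer.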

Automated Predictive World Model
An active copy of the CLP's world model:
 Is the most important subconscious process
 Will serve as an interface to the "real" world
   The CLP effectively is a homunculus (subjective consciousness?)
 Will be both reactive and predictive
 Will generate "anomaly interrupts" upon deviations from expectations as an approach to solving the "brittleness" problem (Perlis 2008)
 Will contain certain relatively immutable concepts to serve as anchors both for emotions and for ensuring safety (trigger patterns – Ohman et al. 2001)
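
The anomaly-interrupt idea reduces to comparing the model's expectations against observations and flagging large deviations. A minimal sketch, with made-up variable names and an arbitrary tolerance:

```python
def anomaly_interrupts(predictions: dict, observations: dict,
                       tolerance: float = 0.1) -> list:
    """Compare the predictive world model's expectations against
    incoming observations; any missing or out-of-tolerance value
    raises an 'anomaly interrupt' for conscious attention."""
    interrupts = []
    for key, expected in predictions.items():
        actual = observations.get(key)
        if actual is None or abs(actual - expected) > tolerance:
            interrupts.append(key)
    return interrupts

# The model expected the door closed and room temperature at 21 C;
# the temperature reading deviates far beyond tolerance.
flags = anomaly_interrupts({"door_angle": 0.0, "temp": 21.0},
                           {"door_angle": 0.05, "temp": 35.0})
```

Routing only the deviations to consciousness is what lets the reactive layer stay fast while still addressing brittleness.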

Anchors & Emotions
 Anchors create a multiple-attachment-point model which is much safer than the single-point-of-failure, top-down-only approach of "machine enslavement" advocated by the SIAI (Yudkowsky 2001)
 Emotions will be generated by the subconscious processes as "actionable qualia" to inform the CLP and will also bias the selection and urgency tags of information relayed via the predictive model
 Violations of the cooperative social living "moral" system will result in a flood of urgently-tagged anomaly interrupts demanding that consciousness resources be expended to "solve the problem"
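
One way to picture emotions biasing urgency tags is a tagged message queue in which moral violations saturate the urgency scale. This is purely an illustrative sketch of the mechanism described above (function names and urgency values are invented):

```python
def tag_urgency(message: str, moral_violation: bool,
                base_urgency: float = 0.2) -> dict:
    """Subconscious processes emit 'actionable qualia' as urgency-tagged
    messages; a moral violation is tagged at maximum urgency so it
    dominates the CLP's attention."""
    urgency = 1.0 if moral_violation else base_urgency
    return {"message": message, "urgency": urgency}

# Highest urgency first: the violation crowds out routine traffic.
queue = sorted(
    [tag_urgency("routine sensor update", False),
     tag_urgency("defection from community detected", True)],
    key=lambda m: -m["urgency"])
```

The safety property being sketched: a violation does not need to be "forbidden" at the top; it simply floods the attention mechanism until consciousness resources address it.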

Conscious Learning Process (CLP)
 The goal is to provide as many optional structures and standards as possible to support and speed development, while not restricting possibilities beyond what is absolutely required for safety.
 We believe the best way to do this is with a blackboard system similar to Learning IDA (Baars and Franklin 2007).
 The CLP acts like the Governing Board of the Policy Governance model (Carver 2006) to create a coherent, consistent, integrated narrative plan of action to fulfill the goals of the larger self.
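
A blackboard system, in the barest form, lets many knowledge sources post candidate content while a controller selects what wins attention. The sketch below is in the spirit of LIDA-style architectures, not the actual Learning IDA implementation; names and the activation scheme are assumptions:

```python
class Blackboard:
    """Minimal blackboard: knowledge sources post activation-weighted
    entries; the controller acts on the highest-activation one."""
    def __init__(self):
        self.entries = []  # (activation, content) pairs

    def post(self, activation: float, content: str) -> None:
        """A knowledge source contributes a candidate to the blackboard."""
        self.entries.append((activation, content))

    def winner(self):
        """Controller step: select the most activated entry, if any."""
        return max(self.entries)[1] if self.entries else None

bb = Blackboard()
bb.post(0.3, "background percept")
bb.post(0.9, "plan next action")
```

The appeal for this architecture is exactly the slide's point: the structure is optional and shared (anything can post), while selection remains a single serial choice the CLP makes.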

A Social Media/Crowd-Sourcing Plan
 Open Architecture
   Open Source Operating System (ROS)
   Pluggable Modules/Services/Interfaces
   "Blackboard" Consciousness
 Community-designed/vetted goals/morality
 Community-designed/vetted anchor points
 Emotions & attention as safety mechanisms
 Critical mass composed of reflection-capable mix-and-match components
 Scripts to build from base to any level
 Contests/gamification to design & build
