Developmental Artificial Intelligence. 1 April 2014. oliviergeorgeon.com

Outline
Example
– Demos with robots.
Conclusion of the course
– Learning through experience.
– Intermediary level of AI: semantic cognition.
Exercise
– Implement your self-programming agent (follow-up).

Robotics research (photo of the robot platform): bumper tactile sensor, panoramic camera, ground optic sensor.

Experimentation

Developmental learning in AI: positioning diagram.
Validation paradigm: behavioral analysis rather than performance measure.
The diagram organizes theories and implementations along two distinctions: symbolic vs. non-symbolic, and learning by registering (Georgeon, 2014) vs. learning by experiencing. Works placed on the map include: Newell & Simon (1972), goals drive problem solving, and (1976) Physical Symbol Systems; Soar, ACT-R, CLARION; reinforcement learning, POMDPs, and other non-symbolic machine learning ("naively confusing perception and sensing", Crowley, 2014); Horde (Sutton et al., 2011); cybernetic control theory (Foerster, 1960); embodied AI; Kuipers et al. (1997); SMC theory (O'Regan & Noë, 2001); phenomenology (Dreyfus, 2007), "even more Heideggerian AI"; "intrinsic motivation" (Oudeyer et al., 2007); perception-action inversion (Pfeifer & Scheier, 1994); "constructivist schema mechanisms" (Drescher, 1991); Radical Interactionism, ECA (Georgeon et al., 2013).

Symbolic AI
– The "environment" passes "symbols" to the agent as input.
– We encode the "semantics" of the symbols in the agent.
– We implement a "reasoning engine".
– ("Symbolic" should not be confused with "discrete".)
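
For contrast with the two settings that follow, here is a deliberately tiny sketch of the symbolic setting (all symbols, semantics, rules, and names are hypothetical illustrations, not code from the course): the designer fixes the symbols, their meaning, and the rules; the agent only applies them.

    # Minimal sketch of the symbolic setting (hypothetical names and rules).
    # The designer fixes the symbols and their semantics; the agent only reasons over them.

    SEMANTICS = {
        "obstacle_ahead": {"blocked": True},
        "path_clear": {"blocked": False},
    }

    RULES = [  # hand-coded "reasoning engine": condition over beliefs -> action
        (lambda beliefs: beliefs["blocked"], "turn"),
        (lambda beliefs: not beliefs["blocked"], "move_forward"),
    ]

    def symbolic_agent(symbol):
        beliefs = SEMANTICS[symbol]        # decode the input symbol with designer-given semantics
        for condition, action in RULES:    # trivial forward inference over the rules
            if condition(beliefs):
                return action

    print(symbolic_agent("obstacle_ahead"))  # -> "turn"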


Learning by registering
– The "environment" passes "observations" to the agent as input.
– The relation state -> observation is "statistically" a surjection.
– We implement algorithms that assume that a given "state" induces a given "observation" (although partial and subject to noise).
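
As a toy illustration of this setting (a sketch under my own assumptions, with hypothetical names, not code from the course): the environment derives an observation from a hidden state and pushes it to the agent, and the agent learns a value for each (observation, action) pair, implicitly assuming the observation reflects the state.

    import random

    class RegisteringEnv:
        def __init__(self):
            self.state = random.randint(0, 1)   # hidden state
        def observe(self):
            # observation = state, occasionally corrupted by noise (the surjection is "statistical")
            return self.state if random.random() > 0.1 else 1 - self.state
        def act(self, action):
            reward = 1 if action == self.state else -1
            self.state = random.randint(0, 1)
            return reward

    env, values = RegisteringEnv(), {}
    for step in range(200):
        obs = env.observe()                                        # input pushed to the agent
        action = max((0, 1), key=lambda a: values.get((obs, a), 0.0))
        if random.random() < 0.1:                                  # small amount of exploration
            action = random.randint(0, 1)
        reward = env.act(action)
        old = values.get((obs, action), 0.0)
        values[(obs, action)] = old + 0.1 * (reward - old)         # incremental value update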


Positioning within the AI landscape (French version of the diagram, translated). The same map as above, now also opposing "disembodied" learning to situated cognition (Clancey, 1992), and learning by observation/registering (Georgeon, 2014) to learning by experience. Additional items placed on this version: neural networks, machine learning, A*, etc.; "naively confusing 'input' and 'perception'" (Crowley, 2014). Validation paradigm: behavioral analysis rather than performance measure. The remaining references are as on the earlier diagram (Newell & Simon; reinforcement learning; "intrinsic motivation", Oudeyer et al., 2007; Horde, Sutton et al., 2011; SMC theory, O'Regan & Noë, 2001; phenomenology, Dreyfus, 2007; Radical Interactionism, ECA, Georgeon et al., 2013; perception-action inversion, Pfeifer & Scheier, 1994).

Change blindness

Learning by experiencing
– The environment passes the "result" of an "experiment" initiated by the agent (figure: experiments and results unfolding over time).
– This is counter-intuitive!
– We implement algorithms that learn to "master the laws of sensorimotor contingencies" (O'Regan & Noë, 2001).
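
The inverted loop can be sketched with the experiment/result vocabulary of the exercise (hypothetical class and method names; the policy here is only a placeholder, not the course's agent): the cycle starts with the agent, and the agent's knowledge consists solely of the interactions it has enacted, never a description of the environment's state.

    class ExperiencingEnv:
        def result(self, experiment):
            return "r1" if experiment == "e1" else "r2"

    class ExperiencingAgent:
        def __init__(self):
            self.enacted = []                        # knowledge = history of enacted interactions
        def choose_experiment(self):
            return "e1" if len(self.enacted) % 2 == 0 else "e2"   # placeholder policy
        def record(self, experiment, result):
            self.enacted.append((experiment, result))

    agent, env = ExperiencingAgent(), ExperiencingEnv()
    for t in range(4):
        experiment = agent.choose_experiment()       # the cycle starts with the agent, not the world
        result = env.result(experiment)
        agent.record(experiment, result)
    print(agent.enacted)   # [('e1', 'r1'), ('e2', 'r2'), ('e1', 'r1'), ('e2', 'r2')]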

Accept the counter-intuitiveness
We have the impression that the sun revolves around the earth.
– False impression! (Copernicus, 1519)
We have the impression that we receive input data about the state of the world.
– False impression! (Philosophy of knowledge since the Enlightenment, at least.)
– How do we translate this counter-intuitiveness into algorithms?

The stakes: semantic cognition
Semantic cognition is an intermediary level between stimulus-response adaptation and reasoning/language, each addressed by different techniques:
– Stimulus-response adaptation: reinforcement learning, neural nets, traditional machine learning.
– Semantic cognition: knowledge grounding, sense-making, self-programming.
– Reasoning and language: rule-based systems, ontologies, traditional AI.

Conclusion
Think in terms of interactions
– rather than separating perception and action.
Think in terms of generated behaviors
– rather than in terms of learned data.
Keep your critical thinking
– invent new approaches!

Invent new approaches
(Diagram: the "hard problem of AI" vs. formalized problems, etc., each depicted as an agent A coupled with an environment E.)

Exercise: Part 4.

Environment 3 (modified)
Behaves like Environment 0 up to cycle 5, then like Environment 1 up to cycle 10, then like Environment 0 again.
Implementation (pseudocode):
– If (step < 5 || step >= 10)
     If (experiment = e1) then result = r1
     If (experiment = e2) then result = r2
– Else
     If (experiment = e1) then result = r2
     If (experiment = e2) then result = r1
– step++
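
A runnable transcription of this pseudocode (the class name and the result() interface are my assumptions about how the course's environments are organized, not the reference code):

    class Environment3:
        """Sketch of the slide's pseudocode: behaves like Environment 0 before cycle 5 and
        from cycle 10 onward, and like Environment 1 in between."""
        def __init__(self):
            self.step = 0
        def result(self, experiment):
            if self.step < 5 or self.step >= 10:             # Environment 0 mapping
                result = "r1" if experiment == "e1" else "r2"
            else:                                            # Environment 1 mapping
                result = "r2" if experiment == "e1" else "r1"
            self.step += 1
            return result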

Agent 3 in Environment 3 (trace)
Phases: steps 0-4 behave like Environment 0, steps 5-9 like Environment 1, steps 10-13 like Environment 0 again.
0. e1r1,-1,0
1. e1r1,-1,0
   learn (e1r1e1r1),-2,1
   activated (e1r1e1r1),-2,1
   propose e1,-1
2. e2r2,1,0
   learn (e1r1e2r2),0,1
3. e2r2,1,0
   learn (e2r2e2r2),2,1
   activated (e2r2e2r2),2,1
   propose e2,1
4. e2r2,1,0
   activated (e2r2e2r2),2,2
   propose e2,2
5. e2r1,-1,0
   learn (e2r2e2r1),0,1
6. e2r1,-1,0
   learn (e2r1e2r1),-2,1
   activated (e2r1e2r1),-2,1
   propose e2,-1
7. e1r2,1,0
   learn (e2r1e1r2),0,1
8. e1r2,1,0
   learn (e1r2e1r2),2,1
   activated (e1r2e1r2),2,1
   propose e1,1
9. e1r2,1,0
   activated (e1r2e1r2),2,2
   propose e1,2
10. e1r1,-1,0
    learn (e1r2e1r1),0,1
    activated (e1r1e2r2),0,1
    activated (e1r1e1r1),-2,1
    propose e2,1
    propose e1,-1
11. e2r2,1,0
    activated (e2r2e2r1),0,1
    activated (e2r2e2r2),2,2
    propose e2,1
12. e2r2,1,0
    activated (e2r2e2r1),0,1
    activated (e2r2e2r2),2,3
    propose e2,2
13. e2r2,1,0

Principle of Agent 3 (timeline diagram)
The diagram shows a timeline of enacted interactions i_{t-4} ... i_{t-1}, i_t (past, present, future). At each decision cycle the agent:
– learns the composite interaction (i_{t-1}, i_t) formed by the previously enacted interaction and the one just enacted (e.g. (i11, i11), (i12, i12));
– activates the previously learned composites whose first element matches the interaction just enacted;
– lets each activated composite propose the experiment of its second element (e.g. a proposed interaction i11 corresponds to experiment e1);
– chooses an experiment among the proposals and executes it.
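
A compact sketch of this principle (my own reconstruction with hypothetical names, not the reference implementation; the valence table is the one used in the exercise and visible in the trace above):

    from collections import defaultdict

    # Valences of the primitive interactions used in the exercise.
    VALENCE = {("e1", "r1"): -1, ("e1", "r2"): 1, ("e2", "r1"): -1, ("e2", "r2"): 1}

    class Agent3Sketch:
        """Learn composite interactions, activate those matching the present, propose, choose."""
        def __init__(self):
            self.weights = defaultdict(int)   # (pre_interaction, post_interaction) -> weight
            self.previous = None              # interaction enacted at t-1

        def choose_experiment(self):
            proposals = {"e1": 0, "e2": 0}
            for (pre, post), weight in self.weights.items():
                if pre == self.previous:                          # composite activated by the present
                    proposals[post[0]] += weight * VALENCE[post]  # it proposes post's experiment
            return max(proposals, key=proposals.get)              # tie-breaking left unspecified

        def record(self, experiment, result):
            interaction = (experiment, result)
            if self.previous is not None:
                self.weights[(self.previous, interaction)] += 1   # learn/reinforce the composite
            self.previous = interaction

This sketch omits details that are visible in the trace above, such as the default choice when nothing is activated and the tie-breaking between equal proposals, so it will not reproduce that trace exactly.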

Environment 4
Returns result r2 only on the second consecutive enaction of the same experiment; otherwise returns r1. Example sequences:
e1 -> r1, e1 -> r2, e1 -> r1, e1 -> r1, … e1 -> r1, e2 -> r1, e2 -> r2, e2 -> r1, …, e2 -> r1, e1 -> r1, e1 -> r2, e2 -> r1, e2 -> r2, e1 -> r1, e1 -> r2, …
Implementation (pseudocode):
If (experiment_{t-2} != experiment_t && experiment_{t-1} == experiment_t) result = r2; else result = r1
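
A runnable transcription of this rule, assuming the same interface as the Environment 3 sketch above (class and attribute names are my own):

    class Environment4:
        """r2 is returned only for the second consecutive enaction of the same experiment,
        otherwise r1 (direct transcription of the slide's rule)."""
        def __init__(self):
            self.t2, self.t1 = None, None                     # experiments at t-2 and t-1
        def result(self, experiment):
            if self.t2 != experiment and self.t1 == experiment:
                result = "r2"
            else:
                result = "r1"
            self.t2, self.t1 = self.t1, experiment
            return result

    env = Environment4()
    print([env.result(e) for e in ["e1", "e1", "e1", "e1"]])  # ['r1', 'r2', 'r1', 'r1'], as on the slide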

Report
Agent 1
– Explanation of the code.
– Traces in Environments 0 and 1 with different motivations.
– Explanation of the behaviors.
Agent 2
– Explanation of the code.
– Traces in Environments 0 to 2 with different motivations.
– Explanation of the behaviors.
Agent 3
– Explanation of the code.
– Traces in Environments 0 to 4 with different motivations.
– Explanation of the behaviors.
Conclusion
– What would be the next step towards an Agent 4 able to adapt to Environments 1 to 4?