© Maciej Komosiński, Pete Mandik Framsticks mind experiments based on the work of Prof. Pete Mandik, Cognitive Science Laboratory, Department of Philosophy, William Paterson University of New Jersey



Philosophy: the core question what is the relation of the mind to the world... such that the mind has representations of the world? the materialistic view: how do brains (physical systems) come to have representations of the world?

What is representation? a state of an organism (its brain) that carries information about environmental and bodily states (Dretske 1988; Millikan 1984; 1993) discussion: information, isomorphism, encoding, decoding

The question in two versions synchronic – what patterns of structure and activity in the world support the representation of objects, properties, and states? diachronic – what happened over time for physical structures to come to have representational contents?

Neurosemantics what does it mean for states of NNs to have representational contents? related to: – philosophy of neuroscience – philosophy of mind

„Having neurons, build a mind” what would you ask for? – more neurons? (a more complex brain?) – a body? (embodiment?) – evolutionary mechanisms? – knowledge about the required mind states/representations? what is the simplest set of conditions for the implementation of mental representations in NNs?

Diachronic approach: temporal and causal priority of representations and nervous systems did NSs exist before or after organisms with representational states? did NSs evolve in order to provide the means for representing, or did they first serve some non-representational function?

AI and computational neuroscience the synthetic approach addresses the synchronic aspects of the problem of representation: construction of NN controllers, testing hypotheses about possible NN architectures supporting intelligent behavior but...

Diachronic aspects not tested how might the proposed NN architectures have evolved from other systems? artificial life techniques are well suited to such questions!

Differences GOFAI (good old-fashioned AI): modeling simplified subsystems of agents, focus on designed aspects Artificial Life: modeling simplified agents, focus on evolved aspects, holism

Artificial life so far focused on evolving „minimally cognitive behavior” –obstacle avoidance –food finding often in opposition to the assumption that intelligent behavior requires mental representation and computation

Representations not needed? Rodney Brooks: „representation is the wrong unit of abstraction in building... intelligent systems” Randall Beer: „the design of the animat’s nervous system is simply such that it... synthesizes behavior that is appropriate to the... circumstances”

Reactive agents! a surprising variety of behaviors despite the lack of internal representations of environments very simple agents exhibit minimally cognitive behavior such agents do not need internal states, so there is no input/output transformation, no computation, and no representation

Control systems in food finders (positive chemotaxis) modular – two control systems: locomotion (for example a CPG) and spatial location of the stimulus non-modular – swims around continuously in wide curved arcs – smell sensor active: the arcs become tighter, food is absorbed – smell sensor highly active: the CPG stops
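The non-modular strategy above can be sketched as a tiny update rule. Everything numeric here (the turn gain, the 0.9 "absorb" threshold, unit speed) is an illustrative assumption, not a parameter taken from the Framsticks experiments:

```python
import math

def step_nonmodular(x, y, heading, smell, base_turn=0.1, dt=1.0, speed=1.0):
    """One step of a hypothetical non-modular food finder.

    The agent always swims in wide arcs (constant base turn rate);
    smell intensity (0..1) tightens the arc, and near-saturating smell
    halts the CPG so the food can be absorbed.
    """
    if smell > 0.9:                           # high sensor activity: CPG stops
        return x, y, heading
    turn = base_turn * (1.0 + 5.0 * smell)    # active sensor -> tighter arcs
    heading += turn * dt
    x += speed * math.cos(heading) * dt
    y += speed * math.sin(heading) * dt
    return x, y, heading
```

The point of the sketch is that a single scalar (smell intensity) modulating one motor parameter (arc curvature) already yields food-finding behavior, with no explicit 2D representation of where the food is.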

Representation vs. modularity modular: position of food in 2D (near-far, left-right), decoded by the single turning muscle non-modular: proximity of food in 1D (near-far), decoded by the muscular system → curvature of arcs

Advantages of representation optimization: identical bodies, similar NNs fitness: the ability to find food optimized: NN weights only smell sensors: none, one, or two NN topology: outputs for muscles (9), inputs for smell sensors, 2 hidden layers with all feed-forward and (for one layer) all feedback connections results averaged over five evolutionary runs per creature
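A minimal sketch of a controller with the described topology. Only the general wiring follows the slide (smell inputs, two hidden layers with feed-forward connections plus full recurrent feedback within one layer, 9 muscle outputs); the hidden-layer size, tanh activations, and weight ranges are assumptions for illustration, and in the experiments the weights were set by evolution rather than at random:

```python
import math
import random

class Controller:
    """Feed-forward net with recurrent feedback inside the second hidden layer."""

    def __init__(self, n_smell, n_hidden=6, n_muscles=9, seed=0):
        rnd = random.Random(seed)

        def w(rows, cols):
            return [[rnd.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

        self.w_in = w(n_hidden, n_smell)     # smell sensors -> hidden layer 1
        self.w_mid = w(n_hidden, n_hidden)   # hidden 1 -> hidden 2 (feed-forward)
        self.w_fb = w(n_hidden, n_hidden)    # hidden 2 -> hidden 2 (feedback)
        self.w_out = w(n_muscles, n_hidden)  # hidden 2 -> 9 muscle outputs
        self.h2 = [0.0] * n_hidden           # recurrent state of hidden layer 2

    @staticmethod
    def _layer(weights, inputs):
        return [math.tanh(sum(wi * x for wi, x in zip(row, inputs)))
                for row in weights]

    def step(self, smell):
        h1 = self._layer(self.w_in, smell)
        pre = [sum(wi * x for wi, x in zip(rm, h1)) +
               sum(wi * x for wi, x in zip(rf, self.h2))
               for rm, rf in zip(self.w_mid, self.w_fb)]
        self.h2 = [math.tanh(p) for p in pre]
        return self._layer(self.w_out, self.h2)  # 9 muscle commands in (-1, 1)
```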

Experimental results [chart omitted; x-axis: number of smell sensors]

Conclusions simple ANNs are capable of supporting representations of spatial locations of stimuli Alife as a way of creating thought experiments of indefinite complexity (Dennett, 1998) can we build a gradualist bridge from simple amoeba-like automata to highly purposive intentional systems, with goals, beliefs, etc.? (Dennett, 1998) representational and computational systems will figure very early in the evolutionary trajectory from mindless automata to minded machines (Mandik 2001)

Four categories of mobile animats Creatures of Pure Will (no sensory inputs) Creatures of Pure Vision (perceive environment) The Historians (memory mechanism) The Scanners (comparison of environment and internal states)

Creatures of Pure Will synthetic psychology and synthetic neuroethology: what are the simplest systems that exhibit mental phenomena? common assumption: movement is required repetitive signals: CPGs
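Reduced to its essence, a CPG is a self-sustaining periodic signal that drives a muscle with no sensory input at all. The closed-form sine generator below is an illustrative stand-in; the evolved Framsticks CPGs are small recurrent neuron circuits, not formulas:

```python
import math

def cpg_signal(t, freq=1.0, phase=0.0, amp=1.0):
    """Repetitive muscle-driving signal of a Creature of Pure Will.

    No sensory input appears anywhere: the output depends only on time,
    which is exactly what makes the behavior 'pure will'.
    """
    return amp * math.sin(2 * math.pi * freq * t + phase)
```

Several such generators with different phases, one per muscle, suffice for rhythmic locomotion such as swimming or walking gaits.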

Comparison are more complex CPGs advantageous? constant body, three kinds of CPGs motor imperative (procedural) representations can be the product of evolution without indicative (declarative) sensory input!

Creatures of Pure Vision taxis: motion toward/away from a stimulus (e.g. positive phototaxis) kinesis: motion triggered/suppressed by a stimulus (e.g. running within some temperature range) CPGs + orientation neurons

2D/3D sensing in water it is not always good to represent more.

The Historians short-term memory: recurrent NNs memory = encoding, maintenance, retrieval retrieval: how to utilize the stored information? but the problem is known in nature: E. coli – so small that it cannot use multiple sensors – but it has to determine the direction of the greatest concentration of nutrient – it memorizes the concentration (internal states!) – and changes heading when a lower concentration is detected

Memory implementation an 8-neuron propagation delay line representation of spatial distance from the stimulus – no memory: two sensors (difference) – memory: a single sensor!
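The delay-line memory can be sketched as a FIFO buffer: each step the current smell reading goes in and the reading from eight steps ago comes out, so a single sensor can play the role of a second, spatially displaced one. The buffer length of 8 follows the slide; the difference-based readout is our illustrative assumption about how the stored value is used:

```python
from collections import deque

class DelayLineMemory:
    """8-step propagation delay line standing in for a second smell sensor."""

    def __init__(self, length=8):
        self.buf = deque([0.0] * length, maxlen=length)

    def step(self, smell_now):
        smell_then = self.buf[0]        # oldest value: the 'remembered' reading
        self.buf.append(smell_now)      # maxlen deque drops the oldest value
        return smell_now - smell_then   # > 0: concentration rising, keep heading
```

This is the same trick attributed to E. coli above: comparison over time substitutes for comparison over space.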

Memory vs. no memory do The Historians construct a representation of the 2D location of food? unknown do The Historians utilize a representation of the past? certainly (they encode, maintain, retrieve)

Question #1 do they compare the delayed (memory) signal with the current (perception) signal, or is the delayed signal alone useful? – verification: the memory buffer works (all weights nonzero) – removal of some connections

Question #2 if the delay buffer weights are set to 0, will they evolve to be non-zero? – yes.

Behavior of The Historians pirouette motion similar to that of C. elegans worms, which use gradient navigation... do nematodes use a kind of memory similar to the one evolved in Framsticks?

The Scanners radar: a single sensor mounted on a long limb, used as an oscillating scanner a CPG to control movement and radar position an orientation muscle controlled by a NN with access to sensor and radar-position information

2D stimulus location smell sensor active – food is where the radar is directed smell sensor inactive – food is elsewhere or: correlation of smell sensor activity with the radar control command
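One hypothetical way a Scanner could decode 2D stimulus direction from the described correlation: weight each radar angle in a sweep by the smell activity recorded there and take the circular mean. This decoding rule is our illustration, not a claim about what the evolved networks actually compute:

```python
import math

def estimate_food_direction(samples):
    """Estimate the egocentric bearing of food from one radar sweep.

    'samples' is a list of (radar_angle, smell_activity) pairs gathered
    while the oscillating scanner sweeps. Summing unit vectors weighted
    by smell activity gives a circular mean that points at the food.
    """
    sx = sum(s * math.cos(a) for a, s in samples)
    sy = sum(s * math.sin(a) for a, s in samples)
    return math.atan2(sy, sx)
```

The circular mean is used instead of a plain average so that angles near the 0/2π wrap-around do not cancel each other out.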

Comparison similar performances similar behavior? – did all of them use only the smell sensor? – did all of them use all the information available? yes: non-zero weights lesion study

Evolved representations
Pure Will – vehicles: motor commands from CPGs; contents: patterns of muscular movement
Pure Vision – vehicles: states of sets of sensory transducer neurons and the signals they passed to orientation muscles; contents: current egocentric locations of food sources in 1D/2D/3D
Historians – vehicles: as Pure Vision, plus memory buffers; contents: as Pure Vision, plus past locations of food
Scanners – as Pure Vision plus Pure Will

References to philosophical theories and paradigms the teleological informational approach (Dretske 1995) isomorphism approaches (Cummins 1996) the temporal evolution of neurosemantics (Millikan 1996) egocentric/allocentric representations

Criticisms simulations too constrained? – no, although more sophisticated scenarios will be useful; there is still much to be done in simple simulations simulations are mere simulations! – they abstract from real phenomena and may leave out crucial features, but this danger is not peculiar to computer models: it applies to all models computer simulation is not real, so it is virtual and thereby fictional!? – nothing is real in computers? no, computers are material