
1 Action Causes Perception Causes Action: From Sensory Substitution to Situated Robots Lecture 3+4, Unit 5 NUCOG Seminar: Action, Perception, Motivation Akureyri, Iceland 10.2.-20.2.2006 Marieke Rohde Centre for Computational Neuroscience and Robotics University of Sussex

2 Recapitulation
Situated and Embodied View:
– The closed sensorimotor loop
– The rejection of (a priori) internal localisation of cognitive function
– Sensorimotor coordination as a reciprocally causal process
Empirical research:
– Perceptual suppleance (sensory substitution)
– Change blindness
– Delay experiments
Sensorimotor contingencies:
– Descriptive concepts for a situated view
– Can be used to explain cognitive phenomena and faculties (e.g. perceptual modalities) without localising the meaningful cognitive phenomena

3 This module
Tuesday:
1. History and Motivation
2. The Importance of Situatedness: Empirical Evidence
3. A Sensorimotor Account
Today:
4. Robotics
5. The Question of Value
6. Conclusion

4 4.) Robotics

5 Shakey
"Shakey was the first mobile robot to reason about its actions."
Shakey implemented the perception, planning, action approach:
– Detailed world model
– Three levels of complexity
http://www.sri.com/about/timeline/shakey.html
Video: http://www.ai.sri.com/movies/Shakey.ram

6 Braitenberg
Cunningly simple
No internal state – yet "cognitive" behaviour
One controller – radically different behaviours
Braitenberg, V.: "Vehicles. Experiments in Synthetic Psychology." Illustrations by Maciek Albrecht, MIT Press, 1984
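To make the "no internal state" point concrete, here is a minimal sketch of such a vehicle in Python. The sensor model, geometry and gains are illustrative assumptions, not Braitenberg's own formulation: the controller is a direct sensor-to-motor wiring, and flipping a single wiring flag switches the behaviour from light-seeking to light-avoiding.

```python
# Minimal Braitenberg-style vehicle: two light sensors wired directly to two
# wheel motors, no internal state at all. Crossed vs. uncrossed wiring alone
# switches between approach and avoidance. All parameters are illustrative.
import math

def sensor_reading(sensor_pos, light_pos):
    """Light intensity falling off with squared distance (toy model)."""
    d2 = (sensor_pos[0] - light_pos[0]) ** 2 + (sensor_pos[1] - light_pos[1]) ** 2
    return 1.0 / (1.0 + d2)

def step(x, y, heading, light, crossed=True, dt=0.1, radius=0.5, gain=2.0):
    """One Euler step of a differential-drive vehicle with direct wiring."""
    # Sensor positions: offset to the left/right of the heading direction.
    lx = x + radius * math.cos(heading + 0.5)
    ly = y + radius * math.sin(heading + 0.5)
    rx = x + radius * math.cos(heading - 0.5)
    ry = y + radius * math.sin(heading - 0.5)
    s_left = sensor_reading((lx, ly), light)
    s_right = sensor_reading((rx, ry), light)
    # Crossed wiring (sensor drives the contralateral motor) turns the vehicle
    # towards the light; uncrossed wiring turns it away. No memory, no model.
    if crossed:
        v_left, v_right = gain * s_right, gain * s_left
    else:
        v_left, v_right = gain * s_left, gain * s_right
    forward = 0.5 * (v_left + v_right)
    turn = (v_right - v_left) / (2 * radius)
    return (x + dt * forward * math.cos(heading),
            y + dt * forward * math.sin(heading),
            heading + dt * turn)

# Example: the same controller, only the wiring flag changed, gives either
# approach or avoidance of a light source at (5, 5).
x, y, heading = 0.0, 0.0, 0.0
for _ in range(200):
    x, y, heading = step(x, y, heading, light=(5.0, 5.0), crossed=True)
print(round(x, 2), round(y, 2))
```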

7 The Case of Walking
Why is walking so easy for us, and so difficult for robots?
(see e.g. Honda's humanoid ASIMO: http://world.honda.com/ASIMO/ )

8 Passive Dynamics
Passive dynamic walking at Cornell: http://www-personal.engin.umich.edu/~shc/movies/passive_angle.mov
Slowly introducing activity (power) in simulation: http://www.droidlogic.com/sussex/dphil/movies/fullspine3_front.mov
Take inspiration from airplanes: gradually add control to gliding.

9 A Lesson from Robotics
In Shakey, a theoretical model was "acid tested" (with little success): the actual problems had not been recognised.
A controller that is no controller at all outperforms Shakey effortlessly.
Close coupling instead of "dead reckoning" and detail through detailed representation.
Classical approaches focus on what people are bad at and computers are good at (logic, mathematics, chess...). They fail to account for what people are good at but computers are bad at.
The rise of Behaviour Based Robotics: e.g. Brooks' "Intelligence Without Reason", subsumption architectures.

10 Evolutionary Robotics

11 Advantages
Integrated sensorimotor systems:
– Close coupling between agent and environment (which is typically bypassed or modelled very poorly)
Control (and minimise) prior assumptions (prejudices):
– About what internal structure is necessary to solve a task
– About what kind of functional decomposition underlies the mastery of a task
– About which strategy is applied to solve a task
– Goes beyond human ingenuity, particularly with respect to complex nonlinear dynamics
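For orientation, the basic loop behind the evolutionary-robotics experiments that follow can be sketched as below. The genotype length, the toy fitness function and the truncation selection scheme are placeholders chosen for illustration, not the setup of any experiment cited on these slides.

```python
# Minimal evolutionary-robotics loop: a population of controller parameter
# vectors is evaluated on behaviour and varied by mutation. All constants and
# the fitness function are illustrative placeholders.
import random

GENOTYPE_LENGTH = 20      # e.g. neural network weights encoded as real numbers
POPULATION_SIZE = 30
GENERATIONS = 100
MUTATION_STD = 0.1

def evaluate(genotype):
    """Placeholder fitness: in a real experiment this would run the agent in
    its (simulated) environment and score the resulting behaviour."""
    return -sum(g * g for g in genotype)   # toy objective for illustration

def mutate(genotype):
    return [g + random.gauss(0.0, MUTATION_STD) for g in genotype]

population = [[random.uniform(-1, 1) for _ in range(GENOTYPE_LENGTH)]
              for _ in range(POPULATION_SIZE)]

for generation in range(GENERATIONS):
    scored = sorted(population, key=evaluate, reverse=True)
    elite = scored[: POPULATION_SIZE // 2]
    # Truncation selection plus mutation; crossover omitted for brevity.
    population = elite + [mutate(random.choice(elite)) for _ in elite]

best = max(population, key=evaluate)
print("best fitness:", round(evaluate(best), 4))
```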

12 Minimally Cognitive Behaviour
Beer, R.D. (2003). Minimally cognitive behaviour – what and why:
– Raises genuine cognitive interest
– Minimal complexity on which we can build up systematically
– Dynamical explanation of brain, body, world interaction
– "Intellectual warm-up", "frictionless brains"

13 Categorical Perception
Task:
– Circular agent
– Continuous-Time Recurrent Neural Network (CTRNN) controller
– Moves left and right
– Distance sensor array
– Objects fall from the sky
– Catch circles, avoid diamonds
– (symmetry)
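The CTRNN controller is a small set of coupled differential equations integrated over time. The sketch below shows the standard state update, an Euler step of tau_i * dy_i/dt = -y_i + sum_j w_ji * sigma(y_j + b_j) + I_i; the network size and the random parameters are placeholders, not Beer's evolved agent.

```python
# Minimal CTRNN state update with Euler integration. Parameters are random
# placeholders; in Beer's experiment they would be set by artificial evolution.
import math
import random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

class CTRNN:
    def __init__(self, size):
        self.size = size
        self.y = [0.0] * size                        # neuron states
        self.tau = [1.0] * size                      # time constants
        self.bias = [random.uniform(-1, 1) for _ in range(size)]
        self.w = [[random.uniform(-1, 1) for _ in range(size)] for _ in range(size)]

    def step(self, external_input, dt=0.01):
        """Euler step of tau_i dy_i/dt = -y_i + sum_j w_ji sigma(y_j + b_j) + I_i."""
        outputs = [sigmoid(yj + bj) for yj, bj in zip(self.y, self.bias)]
        new_y = []
        for i in range(self.size):
            total = sum(self.w[j][i] * outputs[j] for j in range(self.size))
            dy = (-self.y[i] + total + external_input[i]) / self.tau[i]
            new_y.append(self.y[i] + dt * dy)
        self.y = new_y
        return [sigmoid(yi + bi) for yi, bi in zip(self.y, self.bias)]

# In the categorical perception task, the external input would come from the
# agent's distance sensor rays and the outputs would drive the left/right motors.
net = CTRNN(5)
for _ in range(100):
    motor_outputs = net.step([0.5, 0.0, 0.0, 0.0, 0.0])
print([round(o, 3) for o in motor_outputs])
```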

14 Behavioural Explanation
Foveate the object, scan left and right.
Circle: scanning movements become smaller and smaller.
Diamond: large avoidance movement.

15 What the agent sees...

16 "Carrot on a stick"

17 "Psychophysics"
Labeling and discrimination
Discrimination criteria (width!)

18 Psychophysics When is the decision made?

19 Dynamical Explanations
Hardcore mathematics
Dynamics of the agent-environment system

20 Memory
Izquierdo-Torres & Di Paolo (2005):
– Is a reactive agent capable of solving only reactive tasks?
– Reactive task: what to do is immediately obvious from the sensory data

21 Memory
Izquierdo-Torres & Di Paolo (2005):
– Same task, same settings (symmetry etc.)
– Feed-forward network without decay
– Perfect mastery, even with respect to the "psychophysics"
– An embodied and situated agent is never purely reactive: agents modify their position with respect to the objects and thereby partially determine their sensory perception at the next time step

22 More Cases I will just brush over these...

23 Active Vision
Coevolution of active vision and feature selection (Floreano et al. 2004):
– A feedforward network masters complex behaviour relying on vision
Extension: development under motor disruption (Floreano et al. 2004b)
– Analogous to Held's kitten carousel

24 Learning Dynamics
Tuci et al. (2002):
– Recurrent neural networks are dynamical networks, i.e. they have neural state
– Neural network learning is normally thought to happen through a different process (i.e. synaptic learning)
– Learning in a fixed-weight neural network
– Floor sensor, light sensor: find out at the beginning of 14 trials whether you are in a "landmark near" or "landmark far" environment

25 Communication and Social Interaction
Matt Quinn (2001):
– Origins of communication: a signal is only worth making if it is understood, but the first time it is made it cannot yet be understood
– Dedicated channels? No prior assumptions about how to communicate
– A homogeneous population allocates roles ("leader", "follower")
– Minimal sensory and motor equipment (only distance sensing, 5 cm range, noisy)

26 Where Are the Talking Robots?
People tend not to be impressed with this.
I am not impressed with AIBO, ASIMO, the Sony humanoid, ... (well, engineering-wise I am impressed).
The biggest issues in current cognitive robotics (e.g. RoboCup) are still the ones that we get for free: timing issues such as blur, slip, delays...
Manufacturing robots are normally controlled according to dynamical principles.

27 What Does that Prove?
Following up on David's question:
– "When an ER experiment replicates some cognitive capacity of a human or animal, typically in simplistic and minimal form, what conclusions can be drawn from this?" (Harvey et al., 2005)

28 Answers
Harvey et al. (2005): Existence proof
– Sufficient conditions to generate behaviour x
– Catalysing theoretical re-conceptualisations
– Facilitating the production of novel hypotheses
Di Paolo et al. (2000): Opaque thought experiment
– Results follow from premises – but in a non-obvious way
– Empirical flavour: must be observed and understood
– Go beyond human ingenuity and thus make a stronger case
– Can uncover novel concepts, relations etc. to be incorporated in a theory

29 Conclusions so far
What all these experiments show is that behavioural and cognitive phenomena that are typically put in distinct boxes in psychology can be realised by a system whose mechanistic structure is very different from (orthogonal to) the structure of the behaviour space.
These mechanisms tend to be more efficient (i.e. computationally cheaper), and exploitation of a close coupling to the environment is part of this advantage.
My question: Why would nature/evolution box up functional mechanisms?

30 5.) Values

31 The Problem Why is light meaningless to a Braitenberg vehicle? What constitutes genuine purpose, genuine values, genuine intentionality?

32 A Look at Biology
Living organisms have genuine purposes. They care for their survival. They have to, otherwise they would not live.
Survival is not "for free" as it is in the case of the robot. They cannot be reprogrammed.
What is good or bad for them is not down to interpretation.
– Can you redefine what is reward and what is punishment for a living organism?
– Can there be conventions about what is harmful for a living organism?

33 I could never put it as nicely…
"The ill person who cannot express himself anymore, animals, yes, even a paramecium that cramps before it is killed by the picric acid dribbled under the cover slip, the saddening look of a limp plant, the foetus that defends itself with hands and feet against the instruments of the doctor – they all present the meaning of what happens to them. The meaning is explicitly evident in the gestures." (Weber, 2003, p. 149, my translation)

34 Autopoiesis
Maturana and Varela (1980): an operational definition of the living.
Definition: a network of processes of production (synthesis and destruction) of components such that these components:
1. continuously regenerate and realize the network that produces them, and
2. constitute the system as a distinguishable unity in the domain in which they exist
(Weber & Varela, 2002, p. 115)

35 Is that enough?
Autopoiesis just accounts for robustness.
– What is dying? Illness? Stress?
– A merely autopoietic system has no reason to improve the conditions for its continued existence.

36 Adaptivity (Di Paolo, forthcoming)
"A system’s capacity, in some circumstances, to regulate its states and its relation to the environment with the result that, if the states are sufficiently close to the boundary of viability,
1. tendencies are distinguished and acted upon depending on whether the states will approach or recede from the boundary and, as a consequence,
2. tendencies of the first kind are moved closer to or transformed into tendencies of the second and so future states are prevented from reaching the boundary with an outward velocity."

37 Values
Metabolic value, as an end and a criterion for judgment, seems reasonable. Maybe it is enough to explain the behaviour of the simplest organisms.
However, not all our judgments or all our actions seem to be measurable against metabolic value.
Do all values derive from metabolic value?

38 Our Definition of Value
"We propose to define value as the extent to which a situation affects the viability of a self-sustaining and precarious process that generates an identity." (Di Paolo & Rohde, work in progress)
Note: there is reciprocal causality!
Which other values could there be? Are non-metabolic values parasitic on metabolism? Could there be values without a metabolism?

39 Value System Architectures
Edelman's theory of neuronal group selection (e.g. Edelman (1989), Sporns & Edelman (1993)):
– Neural circuits are selected during the organism's lifetime according to Darwinian principles
– Selection through a value signal
– E.g. reaching: "good" if the hand is close to the target
– Reinforcement learning with an internally generated reinforcement signal (see the sketch below)
Very popular with Pfeifer et al. (e.g. Pfeifer and Scheier 1999)
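In code terms, the value-system idea amounts to a fixed, a priori criterion gating an otherwise blind learning process. The following caricature is only meant to support the discussion on the next slides; it is an assumption-laden sketch, not Edelman's or Pfeifer & Scheier's actual models, and all names and parameters are hypothetical.

```python
# Caricature of a "value system": a hard-wired judgement gates changes in an
# otherwise blind motor parameter. Illustrative sketch only.
import random

def value_signal(old_distance, new_distance):
    """A priori, built-in judgement: getting closer to the target is 'good'."""
    return new_distance < old_distance

target = 0.8              # what the built-in value system happens to care about
reach_parameter = 0.0     # the only thing the blind learning mechanism can change

for trial in range(500):
    candidate = reach_parameter + random.gauss(0.0, 0.05)   # blind variation
    # The value system neither acts nor learns; it only says better/worse,
    # and the learning mechanism obeys. The criterion itself can never change.
    if value_signal(abs(reach_parameter - target), abs(candidate - target)):
        reach_parameter = candidate

print("final reach parameter:", round(reach_parameter, 3))
```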

40 The Value System
"[If] the agent is to be autonomous and situated, it has to have a means of 'judging' what is good for it and what is not. Such a means is provided by an agent's value system." (Pfeifer and Scheier 1999)
"already specified during embryogenesis as the result of evolutionary selection upon the phenotype" (Sporns and Edelman 1993)

41 Rephrasing it:
There is a structure that knows what is good and bad (a priori).
The rest of the organism/learning mechanism is ignorant and obeys blindly.
This localised structure by necessity needs to have dedicated input and output channels.
Value systems themselves do not learn, they control learning (functional division).
Or, if they do learn, they learn through a "higher-level" value system (regressus ad infinitum?).
A "VESTIGIAL GHOST IN THE MACHINE"! (Rutkowska 1997)

42 What is wrong with value systems?
The principal objection:
– Values are arbitrary.
– Values are generated separately.
– Values are specified a priori.
– Values are not subject to change themselves.
– What happened to the reciprocal causality?
Think about: sensory substitution, goggle experiments...
Think about: social/abstract values and the requirement for "simple criteria of salience and adaptiveness" ()

43 These are not genuine values!!!

44 What else is wrong with value systems?
Are value systems a good way to model values?
– Investigation of "pseudo-values"
– Models and idealisation: to remove gravity from a model of balloon flight is simply to do away with the original problem we wished to solve
So what is wrong?
– Vulnerability
– Generality/specificity trade-off
– Passing on the explanatory burden
– False dilemmas: analogous to the nature/nurture divide (Oyama 1985)
– The impossibility of novel values

45 Damasio
"[Somatic markers] help our thinking by illuminating some (dangerous or beneficial) options in the right way so that they are quickly removed from further reasoning. You can imagine this as an automatic system for the evaluation of predictions." (my translation from the German translation of Descartes' Error)
"We are born with the neuronal mechanism necessary to generate somatic states facing certain classes of stimuli – the apparatus of primary emotions."

46 Emotion systems
Conceived as forming a complementary system to colder or more detached cognitive processes: a kind of "early warning system" that directly monitors bodily conditions to generate states that modulate all kinds of interactions and internal dynamics.
Again:
– a priori built-in rules
– emotional states
– functional division between the emotion system and other, emotion-free cognitive processes

47 Just to make this very clear:
Nobody denies that such mechanisms can and possibly do work in ontogenetically or phylogenetically pre-established situations.
Both value and emotion systems provide the other cognitive mechanisms with information about the relative relevance of their activities and future choices.
But this cannot account for the generation of novel values.

48 How else could you model values?
Reciprocal causality between value and the value-appraising agent:
– Dynamics
– Plasticity
– Situatedness
– No functional separation

49 Evolutionary Robotics First step: The phototactic homeostatic robot. (Di Paolo 2000, 2003)

50 Trying to get a grip on "value signals"
The fitness-evolving robot:
– Evolve a robot to perform a task (phototaxis) and to produce a signal that represents a fitness estimate
– The fitness estimate is a standard neuron
– Give the robot an environment that requires adaptation (e.g. sensor swapping)
How does the behaviour relate to the "value signal"? How does the "value signal" relate to the behaviour?
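One way such an agent could be scored, purely as an illustration of the idea: the evolved controller is rewarded both for doing phototaxis and for producing, on the designated neuron, an estimate that tracks its own performance. The weighting, the error measure and the function name below are hypothetical assumptions, not the actual experimental setup.

```python
# Illustrative scoring of a "fitness estimating" agent: combine task
# performance with how well the designated neuron tracks that performance.
def combined_fitness(task_scores, estimate_outputs, weight=0.5):
    """task_scores: per-step phototaxis performance in [0, 1].
    estimate_outputs: per-step activation of the designated 'value signal' neuron."""
    assert len(task_scores) == len(estimate_outputs)
    task = sum(task_scores) / len(task_scores)
    # Reward the estimate neuron for tracking actual performance over the trial.
    error = sum(abs(s - e) for s, e in zip(task_scores, estimate_outputs)) / len(task_scores)
    return weight * task + (1 - weight) * (1 - error)

# Example: an agent that does fairly well and whose estimate roughly tracks it.
print(round(combined_fitness([0.2, 0.5, 0.8, 0.9], [0.3, 0.4, 0.7, 0.9]), 3))
```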

51 6.) Conclusions

52 Summary
The situated and embodied approach to cognition and how it differs from the "cognition as information processing" metaphor.
How empirical research supports the situated and embodied view:
– Perceptual perturbations
– Perceptual suppleance (sensory substitution)
– Change blindness
– Delay experiments
The concepts and language that Noë, O'Regan and Hurley contribute to the situated, embodied and dynamical study of cognition; in particular, the idea of sensorimotor contingencies and the mastery of the laws of sensorimotor contingency.

53 Summary
How findings from robotics research, particularly from evolutionary robotics, can inform cognitive science.
How behaviour can be explained dynamically (i.e. using dynamical systems theory).
How genuine value appraisal cannot be produced by a box of judgment rules.
What biology teaches us about metabolic (and other?) values.
How one might approach a complex and rich phenomenon like value appraisal through computational modelling.

54 Thanks are Due to Ezequiel Di Paolo Sarah Angliss Inman Harvey Bill Bigge

55 Any questions?

56 References
Beer, R.D. (2003). The dynamics of active categorical perception in an evolved model agent (with commentary and response). Adaptive Behavior 11(4):209-243.
Braitenberg, V. (1984). Vehicles: Experiments in Synthetic Psychology. Illustrations by Maciek Albrecht. MIT Press.
Collins, S.: Passive Dynamic Walking at Cornell University (retrieved 13.2.2006). Information: http://ruina.tam.cornell.edu/hplab/pdw.html Video: http://ruina.tam.cornell.edu/hplab/downloads/movies/Steve_angle.mov
Damasio, A. R. (2001). Descartes' Irrtum. Fühlen, Denken und das menschliche Gehirn. München: DTV.
Di Paolo, E. A.: Adaptive Systems lecture presentations, University of Sussex (UK), Spring Term 2006.
Di Paolo, E. A. (2005). Autopoiesis, adaptivity, teleology, agency. Phenomenology and the Cognitive Sciences. Forthcoming.
Di Paolo, E. A. (2003). Organismically-inspired robotics: Homeostatic adaptation and natural teleology beyond the closed sensorimotor loop. In K. Murase & T. Asakura (Eds.), Dynamical Systems Approach to Embodiment and Sociality, Advanced Knowledge International, Adelaide, Australia, pp. 19-42.
Di Paolo, E. A., Noble, J., & Bullock, S. (2000). Simulation models as opaque thought experiments. In Bedau, M. A., McCaskill, J. S., Packard, N. H., & Rasmussen, S. (Eds.), Artificial Life VII: Proceedings of the Seventh International Conference on Artificial Life, pp. 497-506. MIT Press, Cambridge, MA. http://citeseer.ist.psu.edu/dipaolo00simulation.html
Di Paolo, E. A. (2000). Homeostatic adaptation to inversion of the visual field and other sensorimotor disruptions. Proc. of SAB'2000, MIT Press.
Edelman, G. (1989). The Remembered Present: A Biological Theory of Consciousness. Basic Books, New York.
Floreano, D., Kato, T., Marocco, D. and Sauser, E. (2004). Coevolution of active vision and feature selection. Biological Cybernetics, 90(3):218-228.
Harvey, I.: Various presentations and lecture material.

57 References
Harvey, I., Di Paolo, E., Wood, R., Quinn, M. and Tuci, E. A. (2005). Evolutionary robotics: A new scientific tool for studying cognition. Artificial Life, 11(1-2):79-98.
Harvey, I., Husbands, P., Cliff, D., Thompson, A. and Jakobi, N. (1996). Evolutionary robotics at Sussex. In Robotics and Manufacturing: Recent Trends in Research and Applications (Proc. World Automation Conf. WAC'96), pages 293-298. New York: ASME Press.
Honda's ASIMO (retrieved 13.2.2006): http://world.honda.com/ASIMO/
Maturana, H. R. and Varela, F. J. (1980). Autopoiesis and Cognition: The Realization of the Living. Reidel.
Oyama, S. (1985). The Ontogeny of Information: Developmental Systems and Evolution. Cambridge University Press, Cambridge.
Quinn, M. (2001). Evolving communication without dedicated communication channels. In Kelemen, J. and Sosik, P., editors, Advances in Artificial Life: Sixth European Conference on Artificial Life: ECAL2001, pages 357-366. Springer.
Rutkowska, J. (1997). What's value worth? Constraining unsupervised behaviour acquisition. In Proc. of the Fourth European Conference on Artificial Life, pages 290-298.
Shakey the robot (Stanford Research Institute), retrieved 13.2.2006: http://www.sri.com/about/timeline/shakey.html
Tuci, E., Quinn, M. and Harvey, I. (2002). Evolving fixed-weight networks for learning robots. In Congress on Evolutionary Computation: CEC2002, pages 1970-1975. IEEE Press.
Sporns, O. and Edelman, G. M. (1993). Solving Bernstein's problem: A proposal for the development of coordinated movement by selection. Child Development 64:960-981.
Suzuki, M., Floreano, D. and Di Paolo, E. A. (2005). The contributions of active body movement to visual development in evolutionary robots. Neural Networks 18(5/6):657-666.
Vaughan, E.'s passive dynamic walking research (retrieved 13.2.2006): http://www.droidlogic.com/
Weber, A., & Varela, F. J. (2002). Life after Kant: Natural purposes and the autopoietic foundations of biological individuality. Phenomenology and the Cognitive Sciences, 1:97-125.
Weber, A. (2003). Natur als Bedeutung: Versuch einer semiotischen Theorie des Lebendigen. Königshausen & Neumann.

