Bayesian goal inference in action observation by co-operating agents
EU-IST-FP6 Project nr. 003747
Raymond H. Cuijpers
Project: Joint-Action Science and Technology (JAST)
Nijmegen Institute of Cognition and Information, Radboud University Nijmegen, The Netherlands

Outline
About Joint Action
The problem of action observation
–Simulation theory
–Goal inference
–Ingredients of the functional model
Functional model of goal inference
–Scenario
–Architecture
Simulation results
Conclusions

Joint action
Multiple levels of co-ordination:
–Kinetic: force, timing
–Kinematic: speed, trajectory
–Action level: what to do?
–Goal level: for what purpose?
–Reasoning: how to reach the destination?
Actions of co-actors typically differ:
–Action observation
–Anticipation of the co-actor's behaviour
Common (ultimate) goal:
–Action sequences
–(Immediate) action goal inference

The problem of action observation

How can we infer the observed action?
Simulation theory: use one's own motor system to simulate the actions of the other.
Examples:
Motor control theory:
–Forward modelling: predict the consequences of actions
–Action observation: predict the observed action from one's own action repertoire
Robotics:
–Direct mapping of observed joint angles onto those of one's own action repertoire
Problems:
–Requires similar effectors and kinematics
–Perception depends on viewpoint

Mirror neurons (Rizzolatti, Fadiga, Gallese, & Fogassi, 1996)
–Mirror neurons fire both during observation and execution of similar actions
–Evidence for simulation theory → shared resources for performing and observing actions
Ideomotor compatibility (Brass, Bekkering, Wohlschlaeger, & Prinz, 2000)
–Lift the finger indicated by a symbol
–Responses are faster when the observed and performed actions are congruent
→ The action system is used in action observation

How can we recognise a dog catching a frisbee?
–Different body: the observed effector differs from one's own effector (mouth vs. hand)
–Different kinematics
→ Direct mapping of joint angles is impossible
→ Forward modelling is impossible
Inference must occur at a more abstract level: goal inference

Imitation by 14-month-old infants (Gergely, Bekkering, & Király, 2002, Nature)
–Infants imitate with their hands when the actor's hands were occupied
–Evidence for goal inference
–Hands occupied → imitate using the hands; hands free → imitate using the head
→ Infants imitate action goals rather than the effector

Evidence for goal inference from mirror neurons (Fogassi, Ferrari, Gesierich, Rozzi, Chersi, & Rizzolatti, 2005, Science, 308)
–Firing rate during grasping depends on the subsequent movement
–Activity is selectively tuned to the action goal (= destination of the food)

Ingredients for the functional model
Viewpoint invariance
–Use viewpoint-independent measures (distance, colour)
Infer action goals (= intended state changes of the world)
–Make decisions at the goal level
–Consistent with the final goal state of a sequence of acts
Use your own action system for observation
–Use your own action repertoire
–Use your own preferences
–Use your own task knowledge, assumed to be common to both agents

Functional model of goal inference during action observation
Cuijpers RH, Van Schie HT, Koppen M, Erlhagen W, and Bekkering H (2006). Goals and means in action observation: a computational approach. Neural Networks, 19.

Model of action goal inference
Two agents co-operatively build a model from Baufix building blocks (illustrated in the slide by an initial state and a final goal state):
–Sequence of primitive motor acts (screw a nut, put a bolt through a hole)
–Observable current state and final goal state (the finished construction)
–Shared task knowledge (action repertoire, action goals)
–Not shared: action sequence, viewpoint, and personal preferences

Model of action goal inference (Cuijpers et al., 2006)
Architecture (actor–observer diagram): from the observation of the actor's hand, the observer computes
–the likelihood that the hand moves to the red bolt,
–via Bayes' rule, the belief that the hand moves to the red bolt,
–via the marginalisation rule, the belief that the action is to screw the red bolt into the green nut,
–and finally the belief that the goal is the red bolt screwed into the green nut, on which a decision is made.

Two fundamental processes
–Turn evidence into beliefs (Bayes' rule)
–Belief propagation (marginalisation rule)
Component level (Bayes' rule): posterior belief ∝ evidence × personal preference
Pr(red bolt | observation) ∝ Pr(observation | target is red bolt) × Pr(red bolt)
Action level (marginalisation rule, using knowledge of one's own action repertoire):
Pr(screw red bolt in green nut) = Σ_n Pr(screw red bolt in green nut | target is c_n) × Pr(target is c_n)
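
The two steps can be illustrated with a few lines of Python. The component names and numbers below are invented for illustration; only the structure (Bayes' rule at the component level followed by marginalisation up to the action level) follows the slide.

```python
# Minimal sketch of the two update steps (hypothetical numbers, not from the model).
# Personal preference (prior) over target components.
prior = {"red_bolt": 0.5, "green_nut": 0.3, "blue_slat": 0.2}

# Likelihood of the current observation given each candidate target,
# e.g. derived from how quickly the hand approaches that component.
likelihood = {"red_bolt": 0.8, "green_nut": 0.3, "blue_slat": 0.1}

# Bayes' rule: posterior belief over target components (component level).
unnorm = {c: likelihood[c] * prior[c] for c in prior}
z = sum(unnorm.values())
posterior_component = {c: v / z for c, v in unnorm.items()}

# Marginalisation rule: propagate component beliefs to an action belief
# using task knowledge p(action | target component) (made-up values).
p_action_given_component = {"red_bolt": 0.9, "green_nut": 0.4, "blue_slat": 0.0}
p_screw_red_bolt_in_green_nut = sum(
    p_action_given_component[c] * posterior_component[c] for c in prior
)
print(posterior_component, p_screw_red_bolt_in_green_nut)
```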

Viewpoint invariance
Observations are based on viewpoint-invariant measures:
–Distance between effector and target
–Rate of change of that distance
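
As a sketch of what such viewpoint-invariant observations might look like in code: the observation is reduced to the effector–target distance and its rate of change, which are the same from every viewpoint. The Gaussian likelihood and the assumed closing speed are illustrative simplifications, not the likelihood model of Cuijpers et al. (2006).

```python
import numpy as np

def observation_features(hand_xyz, target_xyz, prev_hand_xyz, dt):
    """Viewpoint-invariant features: distance to the target and its rate of change."""
    d = np.linalg.norm(np.asarray(target_xyz) - np.asarray(hand_xyz))
    d_prev = np.linalg.norm(np.asarray(target_xyz) - np.asarray(prev_hand_xyz))
    d_dot = (d - d_prev) / dt
    return d, d_dot

def toy_likelihood(d, d_dot, closing_speed=0.3, sigma=0.1):
    """Toy Gaussian likelihood: a hand heading for this target is assumed to close in
    on it at roughly `closing_speed` m/s (an arbitrary constant, for illustration)."""
    return float(np.exp(-0.5 * ((d_dot + closing_speed) / sigma) ** 2))
```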

Use your own action system
Belief propagation uses task knowledge:
–Components required for each action alternative, p(c_n|A_k)
–Action goal associated with each action alternative, p(i→j|A_k)
Use personal preferences (priors):
–Component preferences, p(c_n)
–Action preferences, p(A_k)
–Action goal preferences for a given final goal state, p(i→j|f)
Execution and observation share resources:
–Task knowledge
–Personal preferences
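
One possible way to encode this shared task knowledge and the personal preferences is as plain lookup tables; the entries below are illustrative placeholders, not the actual Baufix task description.

```python
# Task knowledge shared by actor and observer (illustrative entries only).
# Components required by each action alternative: p(c_n | A_k).
components_given_action = {
    "screw_bolt_in_nut": {"red_bolt": 0.5, "green_nut": 0.5},
    "put_bolt_through_slat": {"red_bolt": 0.5, "blue_slat": 0.5},
}

# Action goal (state transition i -> j) associated with each alternative: p(i->j | A_k).
goal_given_action = {
    "screw_bolt_in_nut": {("loose", "bolt_in_nut"): 1.0},
    "put_bolt_through_slat": {("loose", "bolt_through_slat"): 1.0},
}

# Personal preferences (priors), used by the observer as a stand-in for the actor's.
component_prior = {"red_bolt": 0.4, "green_nut": 0.3, "blue_slat": 0.3}
action_prior = {"screw_bolt_in_nut": 0.5, "put_bolt_through_slat": 0.5}
goal_prior_given_final_state = {("loose", "bolt_through_slat"): 0.7,
                                ("loose", "bolt_in_nut"): 0.3}
```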

Infer action goals rather than means
Infer action goal beliefs p(i→j|o_t, f)
–Consistent with the final goal state f
Make the decision at the goal level
–Decide once the belief in an action goal exceeds a threshold: p(i→j|o_t, f) > threshold
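
The goal-level decision can then be as simple as thresholding the strongest goal belief; the threshold value below is an arbitrary placeholder.

```python
def decide_goal(goal_beliefs, threshold=0.8):
    """Return the inferred action goal once one belief p(i->j | o_t, f) exceeds the threshold."""
    goal, belief = max(goal_beliefs.items(), key=lambda kv: kv[1])
    return goal if belief > threshold else None

# Example with made-up numbers: no decision yet at t1, decision at t2.
print(decide_goal({("loose", "bolt_through_slat"): 0.6, ("loose", "bolt_in_nut"): 0.4}))
print(decide_goal({("loose", "bolt_through_slat"): 0.85, ("loose", "bolt_in_nut"): 0.15}))
```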

Simulation results

Scenario (joint task)
–Actor: action goal = bolt through slat; action alternative = c1 + c5; first target = c1
–Observer: infer the goal
–Components in the workspace (shown in the figure): c1, c2, c3, c4, c5

Belief that component c_n is the target, p(c_n|o_t) (plotted for c1–c5 over movement time)
–Nearby targets are more likely unless movement speed is high
–Beliefs are biased by personal preferences
–c1 is correctly identified after 40% of movement time (MT)

Belief in action alternatives, p(A_k|o_t, f) (plotted for actions involving c1–c5; impossible and inconsistent alternatives are marked in the figure)
–Only possible actions receive belief (task knowledge)
–Only actions consistent with the goal state f receive belief (task knowledge)
–Action alternatives with nearby targets are more likely

Belief in action goals, p(i→j|o_t, f)
–Inconsistent action goals are suppressed (task knowledge)
–The correct action goal is inferred after 23% of MT
–The correct action goal is inferred before the action or the target component

Conclusions
–We built a functional model that captures behavioural and neurophysiological findings on action observation
–Missing knowledge about the co-actor is replaced by task knowledge from the observer's own action repertoire
–The inference process is driven by the likelihood of the observed movements and is biased by personal preferences
–Action planning is driven by the intended goal and by personal preferences; as a consequence, imitation need not involve the same effector
–Actions are not directly mapped onto the observer's repertoire; consequently, complementary actions can be as fast as imitative actions in a joint-action context

Thank you for your attention!

Summary of the model equations:
p(c_n|o_t) ∝ p(o_t|c_n) p(c_n)                      component belief ← likelihood, preference
p(A_k|c_n) = p(c_n|A_k) p(A_k) / p(c_n)
p(A_k|o_t) = Σ_n p(A_k|c_n) p(c_n|o_t)              action belief ← action knowledge, component belief
p(i→j|A_k, f) = p(A_k|i→j) p(i→j|f) / p(A_k|f)
p(i→j|o_t, f) = Σ_k p(i→j|A_k, f) p(A_k|o_t)        goal belief ← goal knowledge, action belief
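
Putting the five equations together, a minimal sketch of one pass of the inference chain. The probability tables are assumed to be supplied as dictionaries (including p(A_k|i→j), which appears in the third equation); names are illustrative and no claim is made about the normalisation details of the original model.

```python
def infer_goal_beliefs(obs_likelihood, component_prior,
                       components_given_action, action_prior,
                       action_given_goal, goal_prior_given_f):
    """One pass of the chain: p(c_n|o_t) -> p(A_k|o_t) -> p(i->j|o_t, f)."""
    # Component beliefs: p(c_n|o_t) ∝ p(o_t|c_n) p(c_n)
    p_c = {c: obs_likelihood[c] * component_prior[c] for c in component_prior}
    z = sum(p_c.values())
    p_c = {c: v / z for c, v in p_c.items()}

    # Action beliefs: p(A_k|c_n) = p(c_n|A_k) p(A_k) / p(c_n),
    # then p(A_k|o_t) = sum_n p(A_k|c_n) p(c_n|o_t)
    p_a = {}
    for a in action_prior:
        p_a[a] = sum(
            components_given_action[a].get(c, 0.0) * action_prior[a] / component_prior[c]
            * p_c[c]
            for c in component_prior
        )

    # Goal beliefs: p(i->j|A_k, f) = p(A_k|i->j) p(i->j|f) / p(A_k|f),
    # with p(A_k|f) = sum_{i->j} p(A_k|i->j) p(i->j|f),
    # then p(i->j|o_t, f) = sum_k p(i->j|A_k, f) p(A_k|o_t)
    p_a_given_f = {
        a: sum(action_given_goal[g].get(a, 0.0) * goal_prior_given_f[g]
               for g in goal_prior_given_f)
        for a in action_prior
    }
    p_goal = {}
    for g in goal_prior_given_f:
        p_goal[g] = sum(
            (action_given_goal[g].get(a, 0.0) * goal_prior_given_f[g] / p_a_given_f[a])
            * p_a[a]
            for a in action_prior if p_a_given_f[a] > 0
        )
    return p_c, p_a, p_goal
```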