
1 IJCNN 2010 Panel: Between Bottom-Up and Top-Down, What Is the Much In-between? Wednesday July 21, 4:50pm, Room 123. Presentation Panel Members: Paolo Arena, Edgar Koerner, Asim Roy, Ron Sun, Yan Meng, Giorgio Metta, Angelo Cangelosi, Janusz A. Starzyk, John G. Taylor, Narayan Srinivasa, Juyang Weng.

2 Questions for the Panel. 1. Does the brain use symbolic representation in a fashion similar to our symbolic AI? Why? What lesson can we learn? 2. Connectionist models have shown progress in pattern recognition, but they have been largely used as a classifier or regressor. Some said that artificial neural networks cannot perform reasoning. Is there "a light in the tunnel" for connectionist models to perform goal-directed reasoning? Why? 3. How do you think about the brain/artificial architecture for autonomous development? As the brain is "skull-closed", how does it fully autonomously develop its internal representation from one task to the next? Between bottom and top, what is the much in-between?


4 Embodied cognitive capabilities from simple brains. Paolo Arena, University of Catania, Italy.

5 Some Capabilities of a Simple Brain. Simple animals (insects) are not only reflex automata. They show, individually, capabilities such as numerosity (4), attention and categorisation-like processes (6), sameness vs. difference (1), and water-maze solution (7), considered clear traits of cognition (2), with a limited number of neurons and a simpler organization than mammals. Drosophila melanogaster is a cognitively poor cousin of the bee, but genetic tools are available to unravel the structure-function-behavior loop (5). References: (1) L. Chittka and J. Niven, Are bigger brains better? Current Biology 19, 2009. (2) Shettleworth, S.J. 2001: Animal cognition and animal behaviour. Animal Behaviour 61. (3) van Swinderen, B. and Greenspan, R.J. (2003) Salience modulates 20–30 Hz brain activity in Drosophila. Nat. Neurosci. 6, 579–586. (4) V. Srinivasan, Evidence for counting in insects, Anim. Cogn. (2008). (5) R. Strauss et al., Analysis of a spatial orientation memory in Drosophila, Nature 453, 2008. (6) V. Srinivasan et al., Grouping of visual objects by honeybees, J. Exp. Biol., 2004. (7) J. Wessnitzer et al., Place memory in crickets, Proc. R. Soc. B (2008).

6 Toward an insect brain model. [Diagram: environment interacting with modules for orientation, object detection, classification, memory, olfactory learning, context generalization from visual info, and behavioural evaluation.]

7 Some key aspects of simple brains. Connectionist models of simple brains can show goal-directed reasoning as simple forms of sequence learning, based on cascades of environmentally induced correlations over time (P. Arena et al., IEEE Trans. on Neural Networks, 2009). Substrates include axo-axonal horizontal connections among Kenyon cells (KCs) in the locust mushroom bodies (MBs) for space-time processing (Rabinovich et al., 2003, J. Comp. Neurosci.) and the advanced polymodal calyces of the bee (Strausfeld et al., J. Comp. Neurology, 2009); see also Heisenberg, Nature, 2003.

8 From sensory data distribution to symbol-like representation: what is in-between? Edgar Koerner, Honda Research Institute Europe, Offenbach, Germany.

9 From signal to symbol processing. For simple reactive systems, direct linking of cues to actions works sufficiently well. Complex interaction in a real-world environment, however, requires anchoring knowledge representation to sensory data distributions: use of domain knowledge (e.g. predicted road area, predicted car positions), knowledge of expected features given context and task, cue relevance learned from the environment (day, night, rain), prediction that modulates processing based on acquired knowledge about scene and context, short- and long-term memory of scenes, actions and their outcomes (e.g. a predicted-distance warning), and a unique, task-dependent neuronal and topological fusion strategy with learned multilateral topological interaction weights. How to link sensory data distributions to symbolic knowledge representation? Intermediate representations seem to be required for robust and flexible problem solving in a complex environment. How should intermediate representations be defined: by sensory data structure?

10 What determines intermediate representations in the brain? The cortex stores in its hierarchical structure a hierarchy of representations: features, objects, scenes, concepts, etc. The basic wiring of the representational hierarchy is genetically encoded, referenced to subcortical behaviour control. This is a priori knowledge, acquired through interaction with the environment during phylogenetic development; learning only fills in the blanks, adding specific experience based on sensory data. What is stored at which location in the cortex is defined not by sensory data structure, but by pointers from its phylogenetically older subcortical reference structure (behaviour control) (Felleman & Van Essen 1991).

11 From activity distributions to symbols. Activity at the upper cortical layers is abundant (activity distribution); lower layers 5 and 6 show sparse activation patterns (symbol quality). The large pyramidal neurons of layer 5 (Py5) of the columnar structures are the read-out neurons to behaviour control; without subcortical structures there is no behaviour. The cortex memorizes the experience of successful behaviour for reuse in similar situations. Behaviour-related subcortical inputs control processing and learning at the respective columns, and subcortical inputs can modulate all pyramidal neurons in a column. The Py5 are "fishing" for useful information to control behaviour. [Diagram: cortical area k and cortical area m.]

12 Repetitive instantiation of symbols in the brain? There is no single symbolic knowledge representation level on top. Each behavioural output may instantiate a decision (symbolic quality) based on data distributions shaped by its afferent, efferent, and association inputs. The complexity of symbol content increases with the hierarchic representation level. Seemingly, there is no intermediate representation without behavioural relevance. Behaviour includes both interaction with the environment (overt) and internal simulation for planning and decision making (covert). [Diagram: alternating activity distributions and symbols across hierarchic levels.]

13 Behaviour control needs define the in-between. For defining the representational architecture of autonomously interacting systems, we have to elucidate the relational architecture of the targeted behaviour, not start from data structure. The system does not describe the outside world from an observer's point of view; it describes its interaction with the environment from a subjective point of view, using the metric of its control space to define the in-between levels of representation. Intermediate representations in the brain are not essentially defined by sensory data structure, but by the relational architecture of behaviour control. [Diagram: afferent data distribution feeding both symbolic knowledge representation and behaviour output.]

14 Your Title. Asim Roy, Arizona State University, Tempe, Arizona 85287, USA.

15 Q1: «Does the brain use symbolic representation in a fashion similar to our symbolic AI? Why? What lesson can we learn?» Is localist representation = symbolic representation? Jeff Bowers has published a paper or two arguing for the viability of grandmother cells: cells that represent whole "objects" such as a specific face (or your grandmother's face). Connectionists claim they are at the symbolic level too and can do symbolic computation (Smolensky and others); connectionists don't deny the symbolic level. The argument is not about the symbolic level, but about the form and nature of representation and computation.

16 References. Bowers, J.S. (2009). On the biological plausibility of grandmother cells: implications for neural network theories in psychology and neuroscience. Psychological Review, 116(1). Bowers, J.S. (2010). More on grandmother cells and the biological implausibility of PDP models of cognition: a reply to Plaut and McClelland (2010) and Quian Quiroga and Kreiman (2010). Psychological Review, 117(1). Plaut, D., & McClelland, J. (2010). Locating object knowledge in the brain: Comment on Bowers's (2009) attempt to revive the grandmother cell hypothesis. Psychological Review, 117(1). Smolensky, P. (1988). On the proper treatment of connectionism. The Behavioral and Brain Sciences, 11. Smolensky, P. (1995). Connectionism, constituency and the language of thought. In Macdonald, C., & Macdonald, G. (Eds.), Connectionism. Cambridge, MA: Blackwell.

17 Q2: «Connectionist models have shown progress in pattern recognition, but they have been largely used as a classifier or regressor. Some said that artificial neural networks cannot perform reasoning. Is there "a light in the tunnel" for connectionist models to perform goal-directed sensor-based reasoning? Why? What lesson can we learn?» There is ongoing work on the neural implementation of symbolic logic. Van der Velde and de Kamps have proposed a neural blackboard architecture for sentence structure analysis; it allows for combinatorial (or compositional) structures. Frank van der Velde and Marc de Kamps (2006). Neural blackboard architectures of combinatorial structures in cognition. Behavioral and Brain Sciences, 29: 37-70.

18 Q3: «Symbolic AI architectures start from abstract concepts and connectionist architectures start from concrete receptors (e.g., pixels). Hybrid architectures use both. What is the "much in-between" --- between concrete sensory inputs and abstract concepts?» Almost all useful knowledge is built bottom-up; that's why you have years and years of schooling and practice. You may be taught all the grammar rules of a language, but you won't be able to write good essays unless you practice for years. You can read all the books and watch all the videos about playing a game (soccer, tennis, or swimming), but you can't play the game unless you practice. In old AI, rules are given to you and all you have to do is store and use them. But it doesn't work: try learning math without ever solving a problem. In many cases, rules are given to you to guide learning, but they are not part of the operational system; abstract concepts need to be translated into operational networks. Roy, A. On Connectionism, Rule Extraction and Brain-like Learning. IEEE Transactions on Fuzzy Systems, 8(2).

19 Q4: «How do you think about the brain/artificial architecture for autonomous development? As the brain is "skull-closed," how does it fully autonomously develop its internal representations for the "much in-between" through experience, from one task to the next?» NSF Report (2007): "This strong dependence on human supervision is greatly retarding the development and ubiquitous deployment of autonomous artificial learning systems." We need to develop autonomous algorithms, ones that don't depend on human babysitting for fine-tuning of parameters. It's time to either fix our algorithms or invent new ones. We are interested in forming working groups to create new kinds of algorithms.

20 Autonomous Learning and Development within CLARION: how a hybrid cognitive architecture (CLARION, as a psychological theory) interacts with the world and learns. Ron Sun.

21 CLARION Theory: Basic Ideas (a psychologically realistic, comprehensive theory of the mind). A hybrid connectionist-symbolic system. Dual representation, following dual-process theory of the mind (two levels or more: implicit, explicit, meta-cognitive, etc.; Evans and Frankish, 2009). Capable of both explicit goal-directed reasoning (e.g. in action decision making) and implicit reactive responses (from pattern recognition); the two levels interact with each other. Bottom-up learning and top-down learning are key to autonomous learning and development. Motivational and meta-cognitive processes provide secondary control of behavior.
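The two-level interaction described above can be illustrated with a toy sketch. This is a drastic simplification, not Sun's actual implementation: the class name, the reward-threshold promotion rule, and all parameters are invented for illustration. An implicit level learns Q-values; actions that succeed are promoted bottom-up into explicit rules, which then take top-down precedence in action decision making.

```python
import random

class DualLevelAgent:
    """Toy two-level agent: implicit Q-learning below, explicit rules above."""

    def __init__(self, actions, alpha=0.5, gamma=0.9):
        self.actions = actions
        self.q = {}        # implicit level: (state, action) -> Q-value
        self.rules = {}    # explicit level: state -> action rules
        self.alpha, self.gamma = alpha, gamma

    def act(self, state):
        # Explicit rules take precedence when available (top-down control).
        if state in self.rules:
            return self.rules[state]
        qs = [self.q.get((state, a), 0.0) for a in self.actions]
        best = max(qs)
        return random.choice([a for a, v in zip(self.actions, qs) if v == best])

    def learn(self, state, action, reward, next_state):
        # Implicit level: one-step Q-learning update.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)
        # Bottom-up learning (rule extraction, heavily simplified):
        # a successful implicit action is promoted to an explicit rule.
        if reward > 0:
            self.rules[state] = action
```

In CLARION proper the rule-extraction criterion is more elaborate (rules are generalized, specialized and deleted over time), but even this sketch shows the loop: implicit success feeds explicit knowledge, and explicit knowledge then guides implicit behavior.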

22 CLARION: Answers to the Questions. Does the brain use symbolic representation in a fashion similar to our symbolic AI? Why? What lesson can we learn? Yes, to a certain extent, but that's a tiny part. The majority is implicit, embodied, gradual, and/or reactive, as illustrated by CLARION and associated psychological evidence (see Sun, 2002; Sun et al., 2005, Psych. Rev.; Helie and Sun, 2010, Psych. Rev.). Connectionist models have shown progress in pattern recognition, but they have been largely used as a classifier or regressor. Some said that artificial neural networks cannot perform reasoning. Is there "a light in the tunnel" for connectionist models to perform goal-directed reasoning? Why? Yes. Connectionist models can perform reasoning, implicitly or explicitly; see Sun (1994, 2002, 2011). But what is the point? Both bottom-up and top-down learning are needed for human learning, and their interactions are crucially important (for developing reasoning and beyond). How do you think about the brain/artificial architecture for autonomous development? As the brain is "skull-closed", how does it fully autonomously develop its internal representation from one task to the next? What is the much in-between? Humans learn through interacting with the world. In the process, they rely heavily on implicit learning and bottom-up learning (as well as top-down learning), and on meta-cognitive control and regulation, on the basis of motivational underpinnings (as in CLARION). They also learn through interacting with the social and cultural environment (imitation, communication, teaching, etc.).

23 A Few Pointers to the Literature on the CLARION Theory, Data, and Implementation. R. Sun, Anatomy of the Mind. Oxford University Press, New York. R. Sun, Duality of the Mind. Lawrence Erlbaum Associates, Mahwah, NJ. S. Helie and R. Sun, Incubation, insight, and creative problem solving: A unified theory and a connectionist model. Psychological Review. R. Sun, Motivational representations within a computational cognitive architecture. Cognitive Computation, 1(1). R. Sun, P. Slusarz, and C. Terry, The interaction of the explicit and the implicit in skill learning: A dual-process approach. Psychological Review, 112(1). R. Sun, E. Merrill, and T. Peterson, From implicit skills to explicit knowledge: A bottom-up model of skill learning. Cognitive Science, 25(2).

24 Between Bottom-Up and Top-Down: What Is «the Much In-between»? Yan Meng, Department of Electrical and Computer Engineering, Stevens Institute of Technology, Hoboken, NJ, USA.

25 Q1: Does the brain use symbolic representation in a fashion similar to our symbolic AI? Why? What lesson can we learn? All representations are symbolic: a symbol represents something else by association, resemblance, or convention. The real question is what level of abstraction is necessary to reproduce the functionality of a brain, assuming a perfect model, and how we can enable an AI system to modify/extend its abstraction. Q2: Connectionist models have shown progress in pattern recognition, but they have been largely used as a classifier or regressor. Some said that artificial neural networks cannot perform reasoning. Is there "a light in the tunnel" for connectionist models to perform goal-directed sensor-based reasoning? Why? What lesson can we learn? It isn't an issue of capability, but rather of efficiency. One can devise a case study narrow or simple enough to solve with a feasibly-sized ANN, but that neither solves the underlying scalability problem nor brings us any closer to a system with general reasoning capabilities. ANNs are useful as an interface between sensor inputs and more abstract reasoning modules in hybrid AI systems. Q3: How do you think about the brain/artificial architecture for autonomous development? As the brain is "skull-closed," how does it fully autonomously develop its internal representations from one task to the next? What is the much in-between? The brain does not start as a random network that self-organizes based on experience; it starts with a basic set of primitives and basic assumptions, which must be highly self-extensible. New knowledge is gained from experience/stimuli, and existing knowledge is used to interpret experiences.

26 What Is the Much In-between? Knowledge Organization. There is a hierarchy of abstraction starting from the least abstract (raw sensory patterns) and progressing through multiple levels of increasing abstraction. Each level is basically identifying patterns in the level below itself. Higher abstract concepts are built from relationships among, and generalizations of, lower abstract concepts. (Y. Meng, Y. Jin, J. Yin, and M. Conforth, IJCNN 2010)
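The idea that each level identifies patterns in the level below can be made concrete with a toy sketch (unrelated to the cited paper; the functions and the `C0`, `C1`, … symbol names are invented for illustration): each level finds the most frequent adjacent pair in the sequence below it and replaces it with a new, more abstract symbol.

```python
from collections import Counter

def abstract_one_level(seq, next_symbol):
    """Identify the most frequent adjacent pair in seq and name it next_symbol."""
    pairs = Counter(zip(seq, seq[1:]))
    if not pairs:
        return seq, None
    (a, b), _ = pairs.most_common(1)[0]
    out, i = [], 0
    while i < len(seq):
        if i + 1 < len(seq) and (seq[i], seq[i + 1]) == (a, b):
            out.append(next_symbol)   # higher-level concept built from (a, b)
            i += 2
        else:
            out.append(seq[i])
            i += 1
    return out, (next_symbol, (a, b))

def build_hierarchy(raw, levels=3):
    """Stack abstraction levels; return the top sequence and the concept vocabulary."""
    seq, vocab = list(raw), {}
    for k in range(levels):
        seq, rule = abstract_one_level(seq, f"C{k}")
        if rule:
            vocab[rule[0]] = rule[1]
    return seq, vocab
```

Running `build_hierarchy("ababab")` first abstracts the pair ('a', 'b') into C0, then (C0, C0) into C1, and so on: concepts at each level are generalizations over relationships at the level below, which is the knowledge-organization picture sketched on this slide.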

27 Fusing Bottom-up Learning and Top-down Learning in Object Recognition. NN architecture: one input layer, one output layer, multiple hidden layers, with independent bi-directional connections between adjacent layers. Data flows: the bottom-up pathway generates hypotheses, the top-down pathway produces predictions, and fusion techniques integrate the two-pathway information. Learning: gradient descent learning for both batch and online visual object recognition tasks. (Y. Zheng, Y. Meng, and Y. Jin, IJCNN 2010)
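A minimal sketch of the two-pathway idea. The layer sizes, random weights, and convex-combination fusion rule here are illustrative assumptions, not the architecture or fusion technique of the cited paper; the point is only the data flow: independent bottom-up and top-down connections between adjacent layers, with the two activity estimates fused at every layer.

```python
import numpy as np

rng = np.random.default_rng(0)
sizes = [16, 8, 4]   # input, hidden, output (illustrative)
# Independent bi-directional connections between adjacent layers.
W_up = [rng.normal(size=(m, n)) for n, m in zip(sizes, sizes[1:])]
W_down = [rng.normal(size=(n, m)) for n, m in zip(sizes, sizes[1:])]

def fuse(x, context, alpha=0.5):
    """Bottom-up hypotheses from input x, top-down predictions from a
    context vector at the output layer, fused layer by layer."""
    sig = lambda v: 1.0 / (1.0 + np.exp(-v))
    # Bottom-up pathway: hypotheses at each layer.
    up = [x]
    for W in W_up:
        up.append(sig(W @ up[-1]))
    # Top-down pathway: predictions propagated downward from the context.
    down = [context]
    for W in reversed(W_down):
        down.insert(0, sig(W @ down[0]))
    # Fusion: convex combination of the two estimates at every layer.
    return [alpha * u + (1 - alpha) * d for u, d in zip(up, down)]
```

In the actual system both pathways would be trained by gradient descent; here the weights are random simply to show the shapes and the direction of information flow.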

28 … in between. Giorgio Metta, Italian Institute of Technology & University of Genoa, Genoa, Italy.

29 Some of the in-between (in the brain). Mirror neurons: motor neurons that are activated both when seeing someone else's hand performing a manipulative action and when performing the same action; the type of action seen is relevant. Also found in F4 and parietal cortex, though mirror effects are more widespread than initially thought. From: Fadiga, L., L. Fogassi, V. Gallese, and G. Rizzolatti, Visuomotor neurons: ambiguity of the discharge or "motor" perception? International Journal of Psychophysiology.

30 Action and visual perception. Understanding mirror neurons: a bio-robotic approach. G. Metta, G. Sandini, L. Natale, L. Craighero, L. Fadiga. Interaction Studies, Volume 7.

31 Action and speech perception. [Diagram: audio processing with and without motor involvement.]

32 Giorgio's answers to the questions. Does the brain use symbolic representation in a fashion similar to our symbolic AI? Why? What lesson can we learn? Not necessarily, but the brain can build punctuated representations of events and objects via mechanisms similar to affordances (links between generated actions, target objects and perceived effects). Connectionist models have shown progress in pattern recognition, but they have been largely used as a classifier or regressor. Some said that artificial neural networks cannot perform reasoning. Is there "a light in the tunnel" for connectionist models to perform goal-directed reasoning? Why? Manipulation of symbols (affordances) is reasoning, and this manipulation can be implemented by means of artificial neural networks. How do you think about the brain/artificial architecture for autonomous development? As the brain is "skull-closed", how does it fully autonomously develop its internal representation from one task to the next? What is the much in-between? A possible architecture (also similar to Shanahan's) sees two levels of generalization: first from sensorimotor patterns to affordances, and then from affordances to planning/reasoning.

33 Embodied Connectionism. Angelo Cangelosi, Centre for Robotics and Neural Systems, University of Plymouth, UK.

34 Embodied Cognition: The Case of Action and Language. There are two opposing theoretical approaches to the study of language and cognition (in humans and in cognitive systems/robots): 1. Language is autonomous and amodal (e.g. Fodor, Chomsky, Landauer & Dumais). 2. Language is integrated with cognition and grounded in the world/body (e.g. Cangelosi & Harnad, Gallese & Lakoff, Pulvermuller, Glenberg, Barsalou, Ellis et al., Coventry & Garrod...).

35 Embodied Connectionism. Classical Connectionism (Seidenberg, Plaut et al.): 1. Fixed semantic feature representation of the environment (input and output units). 2. The network (organism) is passive and is exposed to frequency-based training protocols. 3. Subsymbolic representations for symbol manipulation tasks. "No fixed set of semantic features can capture the [language embodiment] phenomena; instead the features needed for understanding (e.g., that a tractor affords raising the body, or that a crutch affords transfer of a soccer ball) arise from an interaction between types of bodies, human goals, and components of the situation." (Glenberg, A. (2005). Lessons from the embodiment of language: Why simulating human language comprehension is hard. In Cangelosi A., Bugmann G. & Borisyuk R. (Eds.), Modeling Language, Cognition and Action: Proceedings of the 9th Neural Computation and Psychology Workshop. World Scientific.) Embodied Connectionism (neuro-robotics): 1. Direct encoding of sensorimotor features (proprioception, vision, motors/actuators…). 2. The organism actively explores and interacts with the environment. 3. Experiments on the development of language and action/language integration.

36 Experiments on action and language and embodied connectionism. Cangelosi et al. (2010). Integration of action and language knowledge: A roadmap for developmental robotics. IEEE Trans. on Autonomous Mental Development. The Modi experiment: thinking with your body (Morse et al. 2010). Stimulus-response compatibility effects and microaffordance (Ellis et al. 2008; Macura et al. 2009).

37 Embodied Connectionism: Answers to the Questions. Does the brain use symbolic representation in a fashion similar to our symbolic AI? Why? What lesson can we learn? There are no AI-type symbols in the brain, but there are embodied representations and PDP mechanisms that guide symbol-manipulation phenomena; e.g. words (symbols) are grounded in sensorimotor experience. Connectionist models have shown progress in pattern recognition, but they have been largely used as a classifier or regressor. Some said that artificial neural networks cannot perform reasoning. Is there "a light in the tunnel" for connectionist models to perform goal-directed reasoning? Why? Yes. Neuro- and developmental robotics allow us to experiment with the integration and embodiment of higher-order cognitive skills (e.g. language, action, emotions). How do you think about the brain/artificial architecture for autonomous development? As the brain is "skull-closed", how does it fully autonomously develop its internal representation from one task to the next? What is the much in-between? Embodiment, evolution and development can explain the learning of sensorimotor and higher-order cognitive capabilities.

38 Mental Development and Representation Building through Motivated Learning. Janusz A. Starzyk.

39 [Diagram: top-down/bottom-up interaction. A central executive links motivation and goal selection, planning and thinking, episodic memory, semantic memory, action monitoring, and attention switching with sensory processing (sensory inputs), motor processing (motor outputs), and rewards and subconscious processing.]

40 Connectionist concept creation. Motivation is pain driven: the goal is to reduce the dominant pain, and abstract goals are created to reduce abstract pains. Example: thirst is a primitive pain; an empty glass of water becomes a sensory input to an abstract pain center (dual pain), which is reduced by drawing water from the well. [Diagram: sensory pathway (perception, sense) and motor pathway (action, reaction) across the Primitive Level, Level I and Level II, with activation, stimulation, inhibition, reinforcement, echo, need and expectation links among "thirsty", "glass of water", "drink", "well with glass" and "draw".]
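The pain-driven loop can be sketched in a few lines. This is an illustrative toy, not Starzyk's actual model: the class, pain values, and resource bookkeeping are invented. The agent always acts on its dominant pain, and when the resource a primitive action depends on runs out, a new abstract pain (and with it an abstract goal) is created.

```python
class MotivatedAgent:
    """Toy motivated-learning agent: acts to reduce the dominant pain."""

    def __init__(self):
        self.pains = {"thirst": 0.8}              # primitive pain
        self.resources = {"glass": 1, "well": 5}  # water in glass / in well

    def dominant_pain(self):
        return max(self.pains, key=self.pains.get)

    def step(self):
        goal = self.dominant_pain()
        if goal == "thirst":
            if self.resources["glass"] > 0:
                self.resources["glass"] -= 1
                self.pains["thirst"] = 0.0        # drinking reduces thirst
            else:
                # Abstract pain created: no water in the glass.
                self.pains["empty_glass"] = 1.0
        elif goal == "empty_glass":
            if self.resources["well"] > 0:
                self.resources["well"] -= 1       # draw water from the well
                self.resources["glass"] += 1
                self.pains["empty_glass"] = 0.0
        return goal
```

Each new abstract pain sits one level above the pain it serves, so repeated interaction builds exactly the Primitive Level / Level I / Level II hierarchy shown on the slide.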

41 Motivation-Driven Learning.

42 What Is Between Top and Bottom in the Brain? John G. Taylor, Department of Mathematics, King's College London.

43 In-between Land. In the bottom: the dwarves working at the coal-face, carving out features and sculpting actions. In the top: consciousness, goals, decisions, emotion values. In between: internal models, attention. [Diagram: SC, Parietal, A, Thal, ACG, SFG, NBM.]

44 Sites of In-Between Land. Inferior parietal (SMG + AG + …) (?). IPS and superior parietal (attention IMC). Superior temporal (affordances/internal models). Supplementary and premotor cortices (internal models/mirror neurons). Function of attention: to awareness. Function of internal models: to mental simulation/reasoning. (GNOSYS review: JGT et al., Image & Vision Computing 27, 2009)

45 Attention circuits: ventral and dorsal (Corbetta et al. 2002, 2005, 2008). There are two BOLD-based circuits: dorsal (endogenous) and ventral (exogenous). The ventral circuit may act as a circuit breaker, possibly via NE. [Corbetta et al., Neuron 58:306, 2008]

46 Attention Copy Model of Consciousness (CODAM) (JGT 2000+). [Diagram: IN → attention → OUT/report. The attention copy signal (precursor) speeds input to report, inhibits distracters, amplifies input, and biases a model of content plus owner activity (ownership).]

47 Overall Structure. Use functionally-defined modules (IMC, STM, feature analysers, action generators, corollary discharge systems and forward models, goals). Connectivity is set genetically by chemical gradients. More data supporting this are coming from brain imaging (Bressler, Desimone). Apply the architecture to cognitive machines (JGT).

48 What Is Between Top and Bottom in the Brain? Narayan Srinivasa, HRL Laboratories LLC, Malibu, CA, USA.

49 Brain: Computation and Function. Top-down and bottom-up processing of signals in the brain is a mere abstraction: activity at any level can be transmitted to any other level via ascending and descending pathways. Inhibition plays key roles in terminating a particular computation, and enables temporal coordination by balancing excitation. Signals propagate between brain areas via spikes, with oscillations serving as a means for packaging spike information. Neuronal groups that are synchronized in spike activity form dynamic internal representations within brain areas; these groups are degenerate: more than one group can give the same output. Synaptic efficacy, modified via spike timing and modulated via rewards or punishments, affects the composition of neuronal groups and also the cortical gains during signal transmission. Re-entrant signaling, enabled by a large number of reciprocal connections, serves to bind the activity of neuronal groups in various brain areas in space and time, either through forced oscillations, resonant loops, or transient oscillatory coupling. Four key functions performed by the brain: categorization, analogies, association and prediction.
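The "synaptic efficacy modified via spike timing and modulated via rewards" idea can be sketched as reward-modulated spike-timing-dependent plasticity (STDP). The parameters and function names below are illustrative, not from any specific model on this slide: spike timing sets an eligibility, and a reward signal gates whether that eligibility becomes a weight change.

```python
import math

def stdp_window(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """dt = t_post - t_pre (ms): pre-before-post potentiates,
    post-before-pre depresses (illustrative exponential window)."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)

def update_weight(w, t_pre, t_post, reward, lr=1.0, w_max=1.0):
    """Reward gates the timing-based eligibility into an actual change."""
    eligibility = stdp_window(t_post - t_pre)
    w = w + lr * reward * eligibility
    return min(max(w, 0.0), w_max)   # keep the synapse in [0, w_max]
```

With reward fixed at 1 this reduces to plain STDP; with reward 0 the timing-based eligibility leaves the synapse untouched, which is how neuromodulation can reshape the composition of neuronal groups.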

50 Key Aspects of Brain Computation. [Diagram: NONSELF world signals reach primary and secondary cortex in each modality (laminar cortical architecture; thalamocortical, corticothalamic and corticocortical loops; degenerate representations), yielding perceptual categorization. SELF signals from homeostatic systems and proprioception (neural registration of internal signals such as hormones, and of muscle and joint states) reach the brainstem, hypothalamus and autonomic centers, yielding internal state categorization.]

51 Key Aspects of Brain Computation. Signals from self and non-self are associated, and memories of rewards and punishments during these associations are formed: value-category memory. [Diagram: internal state categorization (brainstem, hypothalamus, autonomic centers) and perceptual categorization (primary and secondary cortex in each modality) correlated in the septum, amygdala, hippocampus, etc.]

52 Key Aspects of Brain Computation. Associations between value-category memory and perceptual categorizations, via a reentrant loop, yield conceptual categorization. [Diagram: perceptual categorization (primary and secondary cortex in each modality; laminar cortical architecture, corticocortical loops, degenerate representations), value-category memory (correlation in septum, amygdala, hippocampus, etc.), and memory formation in frontal, temporal and parietal areas.]

53 Embodiment and Emergence of Behaviors. Synaptic plasticity, combined with interactions between laminar and subcortical structures caused by embodiment in a non-stationary environment, leads to the emergence of complex behaviors. [Diagram: SELF and NON-SELF pathways converge through cortex, septum/amygdala/hippocampus correlation and frontal/temporal/parietal memory onto the basal ganglia, cerebellum and motor control circuits, producing action.]

54 Brain's Internal Representation: Mixtures of Sensory and Motor Signals. Juyang Weng, Embodied Intelligence Laboratory, Department of Computer Science, Cognitive Science Program, Neuroscience Program, Michigan State University, East Lansing, MI, USA.

55 A1 for Q1: There Seems to Be No Symbol in the Brain. Representation sources from the environment: receptors, muscles and glands. No master maps (different from Treisman 1980; Van Essen 1993; Tsotsos 1995). No meaning boundary (different from Hawkins 2009 and many other symbolic methods). (Weng IJCNN 2010)

56 A3 for Q3: Brain's Representation (Weng & Luciw TAMD 2009). Top Z: actions. Bottom X: receptors (e.g., pixels). In-between: Y, representing the pairs { (x, z) | x in X, z in Z }.

57 A2 for Q2: ED Network for Intent-Directed Reasoning. Epigenetic Developer (ED) Network: each area maps (x, y) with (x_i, y_i) for all i. FA: finite automaton and its probabilistic variants. A proved relation: from any FA, there is an ED Network, so ED Networks can reason (cf. Marvin Minsky: ANNs cannot reason well). (Weng IJCNN 2010; Luciw & Weng IJCNN 2010)
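The flavor of the "from any FA there is a network" claim can be conveyed with a toy embedding (not Weng's actual ED Network construction; the automaton and the encoding are invented for illustration): states and inputs become one-hot vectors, the transition table becomes a binary weight tensor, and a winner-take-all readout picks the next state, so running the network is exactly running the automaton.

```python
import numpy as np

states = ["q0", "q1"]   # q1 "accepts" strings with an odd number of 1s
inputs = ["0", "1"]
delta = {("q0", "0"): "q0", ("q0", "1"): "q1",
         ("q1", "0"): "q1", ("q1", "1"): "q0"}

# Weight tensor W[s, a, p] = 1 iff delta(state s, input a) = state p.
W = np.zeros((len(states), len(inputs), len(states)))
for (s, a), s2 in delta.items():
    W[states.index(s), inputs.index(a), states.index(s2)] = 1.0

def run(string):
    """Simulate the automaton as a network: activation, then winner-take-all."""
    state = np.eye(len(states))[0]   # start in q0 (one-hot)
    for ch in string:
        a = np.eye(len(inputs))[inputs.index(ch)]
        act = np.einsum("s,a,sap->p", state, a, W)   # network step
        state = np.eye(len(states))[np.argmax(act)]  # winner-take-all
    return states[np.argmax(state)]
```

Since every transition of any FA can be written into such a weight tensor, a network with winner-take-all dynamics can emulate the FA, which is the shape of the argument against the claim that neural networks cannot reason.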

58 IJCNN 2010 Panel: Between Bottom-Up and Top-Down, What Is the Much In-between? Wednesday July 21, 4:50pm, Room 123. Discussion Panel Members: Paolo Arena, Edgar Koerner, Asim Roy, Ron Sun, Yan Meng, Angelo Cangelosi, Janusz A. Starzyk, John G. Taylor, Narayan Srinivasa, Juyang Weng, Nik Kasabov.

