Modeling the Brain’s Operating System

Modeling the Brain’s Operating System. Dana H. Ballard, Computer Science Dept., University of Texas at Austin, TX, USA. International Symposium “Vision by Brains and Machines”, November 13th-17th, Montevideo, Uruguay.

Embodied Cognition (Maurice Merleau-Ponty): World, Body, Brain.

Timescales [figure: log time axis, roughly 10^-2 to 10^3 sec, from continuous to discrete processing]: round-trip through cortex, shortest recognition time, modal fixation time, attention switching time, sentence generation, speed-chess minimum search, activity time, memory encoding.

Marr + Brooks: the architecture combines Marr-style visual computation with Brooks-style behavior-based control.

Three Levels of a Human “Operating System”: behaviors are scheduled from a pool; behaviors obtain sensory information; behaviors compete for the body’s motor resources.

Computational Abstraction Hierarchy. Task: make a PBJ sandwich. Component: remove the jelly jar lid. Routine: locate the lid.
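
A minimal sketch of how such a task/component/routine hierarchy might be represented, assuming a simple containment structure (all names here are hypothetical, not from the talk):

    # Hypothetical sketch of the task -> component -> routine hierarchy.
    from dataclasses import dataclass, field
    from typing import Callable, List

    @dataclass
    class Routine:
        name: str                     # e.g. "locate lid"
        run: Callable[[], None]       # a visual or motor primitive

    @dataclass
    class Component:
        name: str                     # e.g. "remove jelly jar lid"
        routines: List[Routine] = field(default_factory=list)

    @dataclass
    class Task:
        name: str                     # e.g. "make a PBJ sandwich"
        components: List[Component] = field(default_factory=list)

    pbj = Task("make a PBJ sandwich", [
        Component("remove jelly jar lid",
                  [Routine("locate lid", run=lambda: None)])])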

Multi-tasking, as revealed by gaze sharing in human data (Shinoda and Hayhoe, Vision Research, 2001).

Visual Routines (Roelfsema et al., PNAS, 2003).

Introducing “Walter”: pick up cans, stay on the sidewalk, avoid obstacles.

Control of visuo-motor routines: each routine is either “active” or “inactive”; only ~4 can run simultaneously, with a fixed update interval (in ms) per behavior.
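
A rough sketch of this control scheme, assuming a simple scheduler that caps the number of active behaviors (the cap of 4 is from the slide; everything else is illustrative):

    # Illustrative behavior scheduler: a pool of behaviors, at most
    # MAX_ACTIVE marked "active", each active one stepped per cycle.
    MAX_ACTIVE = 4

    class Behavior:
        def __init__(self, name):
            self.name = name
            self.active = False

        def step(self):
            pass  # run this behavior's visual routine, update its action

    def update_cycle(pool, priority):
        # priority: a function behavior -> score; activate the top few
        ranked = sorted(pool, key=priority, reverse=True)
        for i, b in enumerate(ranked):
            b.active = i < MAX_ACTIVE
        for b in pool:
            if b.active:
                b.step()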

Walter’s Visual Routines: from the image, extract can locations, the sidewalk location, and 1-D obstacle locations.

Reinforcement Learning Primer, before learning: the agent knows only its current state (“you are here”) and its available actions.

Reinforcement Learning Primer, after learning: the agent has acquired a value for each state and a policy that follows the best values.
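
The primer can be made concrete with a minimal tabular Q-learning sketch: before learning, the table is all zeros (the agent only knows its state and actions); after learning, the table defines both the value function and the greedy policy. The environment interface is assumed:

    # Minimal tabular Q-learning sketch (hypothetical environment).
    import random
    from collections import defaultdict

    ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1
    Q = defaultdict(float)            # Q[(state, action)] -> value

    def policy(state, actions):
        if random.random() < EPSILON:                     # explore
            return random.choice(actions)
        return max(actions, key=lambda a: Q[(state, a)])  # greedy

    def learn(state, action, reward, next_state, actions):
        best_next = max(Q[(next_state, a)] for a in actions)
        Q[(state, action)] += ALPHA * (
            reward + GAMMA * best_next - Q[(state, action)])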

Microbehavior for Litter Cleanup: 1. Visual routine: compute the heading θ and distance d to the can from Walter’s perspective. 2a. Policy: π(θ, d). 2b. Value of policy: Q(θ, d).
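
As a sketch, the (θ, d) state from the visual routine could be discretized before indexing the policy and Q-table; the bin counts and ranges below are illustrative, not the talk’s actual values:

    # Hypothetical discretization of the litter microbehavior's state.
    import numpy as np

    THETA_BINS = np.linspace(-np.pi, np.pi, 13)  # heading to nearest can
    D_BINS = np.linspace(0.0, 10.0, 11)          # distance in meters

    def discretize(theta, d):
        return (int(np.digitize(theta, THETA_BINS)),
                int(np.digitize(d, D_BINS)))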

Learned Microbehaviors: Litter, Sidewalk, Obstacles.

Microbehaviors and the body’s resources (“active” / “inactive”): walking direction uses a weighted average of Q values; gaze direction must use a single best Q value.
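
A sketch of the two arbitration rules, assuming each microbehavior exposes Q values over a shared set of candidate walking directions (shapes and weights are illustrative):

    # Walking: blend behaviors via a weighted average of their Q values.
    # Gaze: winner-take-all, since the eye can only point one place.
    import numpy as np

    def walking_direction(q_tables, weights):
        # q_tables: list of 1-D arrays, one per behavior, over directions
        blended = sum(w * q for w, q in zip(weights, q_tables))
        return int(np.argmax(blended))

    def gaze_owner(q_tables):
        # index of the behavior whose single best Q is largest
        return int(np.argmax([q.max() for q in q_tables]))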

Which microbehavior should get the gaze vector? The one with the largest difference between the best Q given a sample state and the expected Q given the state uncertainty.
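
In sketch form, the criterion compares, per behavior, the average of the best Q over state samples with the best Q of the averaged values; the behavior with the largest gap gains the most from an eye movement. The sampling interface below is an assumption:

    # Expected gain from looking, per microbehavior:
    #   E_s[max_a Q(s,a)] - max_a E_s[Q(s,a)],
    # with s drawn from that behavior's current state uncertainty.
    import numpy as np

    def gaze_gain(q_fn, actions, state_samples):
        q = np.array([[q_fn(s, a) for a in actions]
                      for s in state_samples])
        best_given_sample = q.max(axis=1).mean()
        best_given_uncertainty = q.mean(axis=0).max()
        return best_given_sample - best_given_uncertainty

    def assign_gaze(behaviors):
        # behaviors: list of (q_fn, actions, state_samples) tuples
        return int(np.argmax([gaze_gain(*b) for b in behaviors]))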

[Figure: paired panels for the obstacle, can, and sidewalk microbehaviors.]

Performance Comparison

Walter crosses the street: pick up cans, stay on the sidewalk, avoid obstacles.

Running Behaviors: Eye Movement Trace

Three trials

Eyetracker in V8 helmet

Methods: A curved path in real space produces the perception of a straight path in visual space. Human subjects walk Walter’s route in virtual reality; their 6-DOF head position and 2-DOF gaze direction are continuously tracked. Three subjects were used. The resulting video and eye-track signal are scored frame by frame.

A human walks Walter’s route

Human data: individual fixations, categorized as obstacle, litter, sidewalk, corner, crosswalk, or other side.

Walter (3 trials) vs. human subjects (3): Walter and the humans have similar task priorities.

Human data: two samples with different contexts (near obstacles vs. approaching the crosswalk), compared with Walter. Fixation categories: obstacle avoidance, litter, sidewalk, crosswalk, other side.

Scheduling context (approaching the crosswalk, on the crosswalk, waiting for the light): Walter and the human subjects all exhibit context sensitivities. The humans’ scheduling context is inferred from gaze location alone; the actual internal state is unknown.

Rewards can be changed quickly: litter, sidewalk, obstacles.
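
A sketch of why reward changes can take effect immediately: if each microbehavior’s Q-table is learned once, scaling it by a per-behavior reward weight re-prioritizes behaviors without any relearning. The weights below, including the “ignore the litter” setting, are illustrative:

    # Re-weighting learned Q-tables by per-behavior reward weights.
    weights = {"litter": 1.0, "sidewalk": 1.0, "obstacles": 1.0}

    def combined_q(q_tables, weights):
        # q_tables: dict name -> array of Q values over shared actions
        return sum(weights[n] * q for n, q in q_tables.items())

    weights["litter"] = 0.0   # e.g. the "ignore the litter" instruction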

Changing the reward schedule (“ignore the litter” / “ignore the obstacles”): Walter vs. humans.

Saliency Map vs. Gaze (saliency computed with the program provided by Dr. Laurent Itti at the iLab, USC): some fixations match the saliency map, others do not.

Credit Assignment - MIT model

Credit Assignment - Our Model

The laboratory at Rochester. Computer Science: Dana Ballard. Cognitive Science: Mary Hayhoe. Members: Brian Sullivan, Jelena Jovancevic, Constantin Rothkopf. Alumni: Chen Yu, Pili Aivar, Nathan Sprague, Jochen Triesch, Al Robinson, Neil Mennie, Weilie Yi, Jason Droll, Xue Gu, Jonathan Shaw.