Cognitive Computer Vision Kingsley Sage and Hilary Buxton Prepared under ECVision Specific Action 8-3

1 Cognitive Computer Vision Kingsley Sage khs20@sussex.ac.uk and Hilary Buxton hilaryb@sussex.ac.uk Prepared under ECVision Specific Action 8-3 http://www.ecvision.org

2 Lecture 15 Active Vision & cameras Research issues

3 Active vision In recent years there has been growing interest in using active control of “image formation” to simplify and accelerate scene understanding. Examples of “image formation” control include: – gaze or focus of attention (saccadic control) – stereo viewing geometry (vergence control) – a head-mounted camera

4 Active vision Historical roots of “Active Computer Vision” – 1982: Term first used by Bajcsy (NATO workshop) – 1987: Paper by Aloimonos et al (ICCV) – 1989: Entire session at ICCV References: – “Active Perception”, R. Bajcsy, IEEE Proceedings Vol 76, No 8, pp 996-1006, August 1988 – “Active Vision”, J. Y. Aloimonos, I. Weiss and A. Bandopadhay, ICCV, pp 333-356, 1987

5 Active vision To reconstruct or not to reconstruct? “Classical” stereo correspondence reconstructs a scene in a reference frame based on stereo geometry. Active vision changes vergence angles, focus etc., making reconstruction by traditional means intractable, so active systems avoid reconstruction wherever possible. Many visual control tasks, such as driving a car or grasping an object, can be performed by servoing directly from measurements made in the image: – “A New Approach to Visual Servoing in Robotics”, B. Espiau, F. Chaumette and P. Rives, IEEE Trans. on Robotics and Automation 8(3), June 1992
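The idea of servoing directly from image measurements can be sketched as a simple proportional image-based visual servoing (IBVS) loop. This is a generic textbook-style sketch, not the specific controller of Espiau et al.; the function name, gains, and the assumption of a single known depth for all points are illustrative.

```python
import numpy as np

def ibvs_velocity(features, targets, depth, focal=1.0, gain=0.5):
    """Proportional image-based visual servoing sketch.

    features, targets : (N, 2) arrays of current / desired image points
                        (normalised image coordinates)
    depth             : assumed depth Z of the points (metres) -- in a real
                        system this would be estimated or approximated
    Returns a 6-vector camera velocity (vx, vy, vz, wx, wy, wz).
    """
    err = (features - targets).reshape(-1)
    L_rows = []
    for (x, y) in features:
        Z = depth
        # Classical interaction (image Jacobian) matrix for a point feature
        L_rows.append([-focal / Z, 0, x / Z,
                       x * y / focal, -(focal + x * x / focal), y])
        L_rows.append([0, -focal / Z, y / Z,
                       focal + y * y / focal, -x * y / focal, -x])
    L = np.array(L_rows)
    # Drive the feature error to zero: v = -gain * pinv(L) @ e
    return -gain * np.linalg.pinv(L) @ err
```

No 3D reconstruction is performed: the control signal is computed entirely from where the points appear in the image versus where we want them to appear.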

6 Active vision Application areas Task based visual control – Example in the ActIPret project – Need to get reference for second video!! Navigation Telepresence Wearable computing Panoramic cameras Saccadic control

7 Task based visual control The ActIPret project In ActIPret, information about the current task (which objects we are likely to interact with, and what types of behaviour) is used to determine in real time an optimum viewing geometry (gaze vector, focus, zoom)

8 Task based visual control Source unknown (for now) The vision system uses an appearance-based model to determine how and when it is appropriate to pick up the part

9 Active vision in navigation Example: GTI Project http://www.robots.ox.ac.uk/~lav/Research/GTI/section1.html One approach to visual navigation in cluttered environments is to recover the boundaries of free space and then move conservatively along the middle of it. Humans, by contrast, tend to cut corners by "swinging" from protruding corner to protruding corner. Using a stereo head to recover the range to a fixated point, the vehicle can be taken into "orbit" around the fixated point at a chosen safe clearance radius |R|. (The sense of rotation can be chosen by using R>0 or R<0.)
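The orbiting behaviour above can be illustrated with a minimal unicycle-style steering sketch. This is not the GTI project's controller; the function, gain, and correction law are illustrative assumptions, showing only how a signed radius R picks the orbit direction and how the measured range drives the turn rate.

```python
def orbit_control(range_to_point, R, speed=0.3, k=1.0):
    """Steering sketch for orbiting a fixated point at signed radius R.

    range_to_point : current measured range to the fixated point (m),
                     e.g. from stereo fixation
    R              : desired signed clearance radius (m); the sign of R
                     selects the sense of rotation
    Returns an angular velocity (rad/s) for a robot moving at `speed`.
    """
    # Nominal curvature for a circle of radius |R|; sign of R picks direction
    omega_nominal = speed / R
    # If farther out than |R|, tighten the turn toward the point;
    # if closer, widen it (correction sign follows the orbit direction)
    correction = k * (range_to_point - abs(R)) * (1.0 if R > 0 else -1.0)
    return omega_nominal + correction
```

On a perfect orbit (range equal to |R|) the correction vanishes and the command reduces to the constant turn rate speed/R.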

10 Telepresence Example: VFR Project http://www.robots.ox.ac.uk/~lav/Research/VFR/section1.html Telepresence can be defined as the process of sensing sufficient information about the operator and task environment, and communicating this information in a sufficiently natural way to the human operator, that the operator feels physically present at the remote site. The top movie shows an early version of a tracker using infra-red light to control 2 degrees of freedom of the head at 50Hz. The bottom movie shows a more sophisticated version controlling the head at the end of a robot arm.

11 Wearable computing Example: DyPERS from MIT

12 Panoramic vision 360° images are usually achieved using a 2D imaging array looking into a rotating mirror or hemispherical reflector. The rotating mirror approach allows variable resolution at different angular ranges. Lots of good web links at: http://www.cis.upenn.edu/~kostas/omni.html
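A common processing step with the hemispherical-reflector approach is to "unwrap" the circular mirror image into a rectangular panoramic strip by sampling along radii. The sketch below is a generic nearest-neighbour polar unwrapping, with assumed parameters (image centre, radial band); real systems would calibrate the mirror geometry and interpolate.

```python
import numpy as np

def unwrap_omni(img, center, r_min, r_max, out_w=360, out_h=64):
    """Unwrap a circular omnidirectional image into a panoramic strip.

    img          : (H, W) grayscale array from a camera viewing a mirror
    center       : (cx, cy) pixel position of the mirror centre
    r_min, r_max : radial band of the mirror image to sample
    Returns an (out_h, out_w) panorama; each column is one azimuth angle.
    """
    cx, cy = center
    pano = np.zeros((out_h, out_w), dtype=img.dtype)
    for col in range(out_w):
        theta = 2 * np.pi * col / out_w          # azimuth for this column
        for row in range(out_h):
            # Sample outward along the radius for this column
            r = r_min + (r_max - r_min) * row / (out_h - 1)
            x = int(round(cx + r * np.cos(theta)))
            y = int(round(cy + r * np.sin(theta)))
            if 0 <= x < img.shape[1] and 0 <= y < img.shape[0]:
                pano[row, col] = img[y, x]
    return pano
```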

13 Panoramic vision Panorama pictures taken from: http://cmp.felk.cvut.cz/demos/Omnivis/Photos/omniphotos.html

14 Panoramic vision application Homing robot (ICS, Greece) http://www.ics.forth.gr/~argyros/research/pan_homing.htm Perceptual processes are addressed in the context of the goals, environment and behaviour of a system. This is a novel vision-based method for robot homing: the problem of computing a route so that a robot can return to its initial "home" position after executing an arbitrary "prior" path. The robot tracks visual features in panoramic views of the environment acquired as it moves.

15 Panoramic vision application Homing robot (ICS, Greece) http://www.ics.forth.gr/~argyros/research/pan_homing.htm When homing is initiated, the robot selects Milestone Positions (MPs) on the “prior” path by exploiting information in its visual memory. The MP selection process aims at picking positions that guarantee the success of the local control strategy between two consecutive MPs. See website for panoramic view
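The milestone-position idea can be illustrated with a simple greedy selection over the robot's visual memory. This is an illustrative sketch, not the FORTH algorithm: the feature-overlap criterion and threshold are assumptions, capturing only the stated goal that consecutive MPs must share enough tracked features for local visual control between them.

```python
def select_milestones(feature_tracks, min_shared=0.5):
    """Greedy milestone position (MP) selection sketch.

    feature_tracks : list, one entry per recorded path position, each a
                     set of visible feature ids at that position
    min_shared     : minimum fraction of the previous MP's features that
                     must still be visible (assumed threshold)
    Returns indices of selected MPs, always including start and end.
    """
    mps = [0]
    for i in range(1, len(feature_tracks)):
        ref = feature_tracks[mps[-1]]
        overlap = len(ref & feature_tracks[i]) / max(len(ref), 1)
        # When too few of the last MP's features survive, drop a new MP
        # here so the local control strategy keeps enough shared features
        if overlap < min_shared:
            mps.append(i)
    if mps[-1] != len(feature_tracks) - 1:
        mps.append(len(feature_tracks) - 1)
    return mps
```

During homing, the robot would then servo from one MP to the next using the features the two positions share.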

16 Saccadic control Attention – recognition loop (KTH, Sweden) http://www.nada.kth.se/~celle/ The scene is observed using a stereo head. Disparity between the two images can be used to localise objects in 3D. The system saccades to an object, and the localised object is then recognised.
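The disparity-based localisation step rests on the standard stereo relation Z = f·B/d: depth is focal length times baseline divided by disparity. A minimal sketch (generic formula, not KTH's implementation):

```python
def depth_from_disparity(disparity_px, focal_px, baseline_m):
    """Depth of a fixated point from stereo disparity.

    disparity_px : horizontal disparity between the two views (pixels)
    focal_px     : camera focal length (pixels)
    baseline_m   : distance between the two cameras (metres)
    Returns depth Z in metres, from Z = f * B / d.
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relationship: nearby objects produce large disparities, so depth resolution degrades with distance, which is one reason saccading to fixate the object of interest helps.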

17 Robots that interact with humans SONY QRIO robot

18 The end Please feed back comments to Kingsley Sage or Hilary Buxton at the University of Sussex, UK

