Presentation transcript:

1 of 25: Blind-Spot Experiment
Draw an image similar to that below on a piece of paper (the dot and cross are about 6 inches apart). Close your right eye and focus on the cross with your left eye. Hold the image about 20 inches away from your face and move it slowly towards you. The dot should disappear!

2 of 25 (image-only slide)

Mobile Robotics: 7. Vision 2, Dr. Brian Mac Namee

4 of 25: Acknowledgments
These notes are based (heavily) on those provided by the authors to accompany "Introduction to Autonomous Mobile Robots" by Roland Siegwart and Illah R. Nourbakhsh. More information about the book is available at: The book can be bought at The MIT Press and Amazon.com.

5 of 25: Today's Lecture
Today we will have a quick tour of some of the uses of vision sensors in robotics, including:
– Colour models
– Object recognition
– Face recognition
– Stereo vision
– Object tracking
We will give only a brief overview of all of this, as computer vision is far too massive a subject to cover fully in our course.

6 of 25: RGB Colour Model
Think of R, G, and B as an orthogonal colour basis:
– (1,0,0): pure red
– (0,1,0): pure green
– (0,0,1): pure blue
– (1,1,1): white
– (0,0,0): black (hidden at the back corner of the colour cube in the figure)

7 of 25: HSV Colour Model
The HSV (hue, saturation, value) model is more robust against illumination changes, but must still confront noise, specularity, etc.
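As a concrete illustration of this robustness, the minimal sketch below (using Python's standard colorsys module; the colour values are made up for illustration) converts a bright red and a dim red to HSV. Their RGB triples differ greatly, but their hues agree, which is why hue-based thresholds tolerate illumination changes better than raw RGB thresholds.

```python
import colorsys

# Illustrative normalised RGB triples (not from the slides): the same
# red surface seen under strong and weak illumination.
bright_red = (1.0, 0.1, 0.1)
dim_red = (0.4, 0.04, 0.04)

h1, s1, v1 = colorsys.rgb_to_hsv(*bright_red)
h2, s2, v2 = colorsys.rgb_to_hsv(*dim_red)

# The hues are identical while the values (brightness) differ,
# so a hue threshold would classify both pixels as "red".
print(round(h1, 3), round(h2, 3))
print(round(v1, 3), round(v2, 3))
```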

8 of 25: Object Detection
Suppose we want to detect an object (e.g., a coloured ball) in the field of view. We simply need to identify the pixels of some desired colour in the image…right?
(Figure: image coordinate system in pixels, with u and v axes, image width and height, and the object region.)

9 of 25: It's Not That Easy!
– Occluded light source
– Specular highlights
– Mixed pixels
– Complex surface geometry (self-shadowing)
– Noise!

10 of 25: Evolution Robotics' ViPR System
Evolution Robotics' ViPR (visual pattern recognition) technology provides a reliable and robust vision solution to object recognition. The technique is based on extracting salient features from an image. Salient features are artefacts such as edges, corners, etc. The description of an object is a set of up to a thousand salient features, the textures of the pixels around them, and their relationships to each other.

11 of 25: Evolution Robotics' ViPR System (cont…)
Object recognition then involves first finding all of the features in a new image. These features are matched against those of all of the models in a database. If many features in a new image are the same as those in a database model, the model is a good candidate match. Further accuracy is obtained by comparing the relative positions of the matched features in the image and the model. The model in the database with the best match score is then recognised.
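The matching step described above can be sketched as a nearest-neighbour search over feature descriptors. The sketch below is not ViPR's actual implementation; it uses toy two-dimensional descriptors (real SIFT-style descriptors are 128-dimensional) and Lowe's ratio test, which rejects matches whose nearest database neighbour is not clearly closer than the second nearest.

```python
import numpy as np

def match_features(query, database, ratio=0.8):
    """Match each query descriptor to its nearest database descriptor,
    keeping it only if the nearest neighbour is clearly closer than the
    second nearest (Lowe's ratio test)."""
    matches = []
    for qi, q in enumerate(query):
        dists = np.linalg.norm(database - q, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((qi, int(best)))
    return matches

# Toy descriptors: query 0 has one clear match in the database,
# query 1 is ambiguous (two near-identical candidates) and is rejected.
db = np.array([[1.0, 0.0], [0.0, 1.0], [0.0, 0.9]])
q = np.array([[0.95, 0.05], [0.0, 0.95]])
print(match_features(q, db))  # → [(0, 0)]
```

Counting how many query features survive this test against each database model gives the "many features are the same" candidate score the slide describes.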

12 of 25: ViPR Example

13 of 25: ViPR Pros & Cons
The advantages of the ViPR system include:
– Invariance to rotation and affine transformation
– Invariance to changes in scale
– Invariance to lighting changes
– Invariance to occlusions
– Reliable recognition
However, while ViPR can be used to recognise symbols and rigid 3D objects, it cannot be used to recognise deformable 3D objects such as faces.
For more information on the technologies behind ViPR, have a look at:
– "Core Technologies for Service Robotics", N. Karlsson, M. E. Munich, L. Goncalves, J. Ostrowski, E. Di Bernardo & P. Pirjanian
– "Distinctive Image Features from Scale-Invariant Keypoints", D. Lowe

14 of 25: ViPR Demo

15 of 25: Vision Through Colour Tracking
Often colour alone can be used to perform vision tasks. We use flood-fill techniques like those in Photoshop. This is particularly useful in controlled environments.
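The flood-fill idea mentioned above can be sketched in a few lines: starting from a seed pixel, grow a region by absorbing neighbours whose colour is close enough to the seed's. This is a generic illustration (single-channel intensities and a made-up tolerance), not code from the lecture.

```python
from collections import deque

def flood_fill(image, seed, tol=10):
    """Grow a region from `seed`, absorbing 4-connected neighbours whose
    value is within `tol` of the seed value (like Photoshop's magic wand)."""
    rows, cols = len(image), len(image[0])
    target = image[seed[0]][seed[1]]
    region, frontier = {seed}, deque([seed])
    while frontier:
        r, c = frontier.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < rows and 0 <= nc < cols \
                    and (nr, nc) not in region \
                    and abs(image[nr][nc] - target) <= tol:
                region.add((nr, nc))
                frontier.append((nr, nc))
    return region

# A bright 2x2 blob (values near 200) on a dark background (values near 20):
# seeding inside the blob recovers exactly its four pixels.
img = [[20, 20, 20, 20],
       [20, 200, 205, 20],
       [20, 198, 202, 20],
       [20, 20, 20, 20]]
print(sorted(flood_fill(img, (1, 1))))  # → [(1, 1), (1, 2), (2, 1), (2, 2)]
```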

16 of 25: Face Tracking Using Colour Alone
Pipeline: Image Acquisition → RGB to HSV Conversion → Skin Colour Binary Image → Image Closing → Segmentation → Selection by Size
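The "Image Closing" stage of this pipeline (morphological closing: dilation followed by erosion) fills small holes in the skin-colour binary mask before segmentation. A minimal sketch, assuming a boolean NumPy mask and a 3x3 structuring element; note that np.roll wraps around at the image border, which is fine for this interior example but not a faithful closing at the edges.

```python
import numpy as np

def dilate(mask):
    """3x3 binary dilation: a pixel is set if any 8-neighbour is set."""
    out = mask.copy()
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out |= np.roll(np.roll(mask, dr, axis=0), dc, axis=1)
    return out

def erode(mask):
    """3x3 binary erosion: a pixel survives only if all 8-neighbours are set."""
    out = mask.copy()
    for dr in (-1, 0, 1):
        for dc in (-1, 0, 1):
            out &= np.roll(np.roll(mask, dr, axis=0), dc, axis=1)
    return out

def close_mask(mask):
    """Morphological closing = dilation then erosion; fills small holes."""
    return erode(dilate(mask))

# A toy skin-colour mask with a one-pixel hole at its centre;
# closing fills the hole so the face region segments as one blob.
mask = np.ones((5, 5), dtype=bool)
mask[0, :] = mask[-1, :] = mask[:, 0] = mask[:, -1] = False
mask[2, 2] = False
print(close_mask(mask)[2, 2])  # → True (the hole is filled)
```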

17 of 25: Stereo Vision
Stereo vision is used to determine distance through differences between images taken from two cameras positioned slightly apart:
– Just like our eyes!

18 of 25: Stereo Vision (idealised camera geometry for stereo vision)

19 of 25: Stereo Vision
A point visible from both cameras produces a conjugate pair:
– Conjugate pairs lie on epipolar lines (parallel to the x-axis for the arrangement in the figure above)
– From a conjugate pair, distance can be estimated
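For the idealised geometry above, the distance estimate from a conjugate pair is depth = focal length x baseline / disparity, where disparity is the difference between the pair's image x-coordinates. The numbers below are illustrative, not from the slides.

```python
# Depth from a conjugate pair under the idealised stereo geometry:
#   disparity d = x_left - x_right,   depth Z = f * b / d
f = 700.0   # focal length in pixels (illustrative)
b = 0.12    # baseline between the cameras in metres (illustrative)
x_left, x_right = 412.0, 370.0   # image x-coordinates of the conjugate pair

d = x_left - x_right   # disparity: 42 pixels
Z = f * b / d          # depth in metres
print(round(Z, 2))  # → 2.0
```

Note the inverse relationship: nearby points have large disparities, distant points have small ones, which is why depth resolution degrades with distance.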

20 of 25: Stereo Vision
Calculation of depth:
– The key problem in stereo is: how do we solve the correspondence problem?
Gray-level matching:
– Match gray-level waveforms on corresponding epipolar lines
– "Brightness" = image irradiance I(x,y)
– The zero crossings of the Laplacian of Gaussian are a widely used approach for identifying features in the left and right images
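One common form of gray-level matching is to slide a small window along the epipolar line and pick the position with the smallest sum of squared differences (SSD). This is a generic sketch of that idea on a single toy scanline, not the specific method from the lecture.

```python
import numpy as np

def best_match(left_row, right_row, x, half=2):
    """Find the right-image column whose window best matches (minimum SSD)
    the window centred at column `x` of the left-image row. The search is
    restricted to the same row: the epipolar line for the idealised
    stereo geometry described above."""
    patch = left_row[x - half:x + half + 1]
    best_x, best_ssd = None, float("inf")
    for cx in range(half, len(right_row) - half):
        cand = right_row[cx - half:cx + half + 1]
        ssd = float(np.sum((patch - cand) ** 2))
        if ssd < best_ssd:
            best_x, best_ssd = cx, ssd
    return best_x

# Toy scanlines: the right row is the left row shifted 3 pixels,
# so matching should recover a disparity of 3.
left = np.array([0, 0, 0, 0, 9, 7, 5, 0, 0, 0], dtype=float)
right = np.roll(left, -3)
x = 5
print(x - best_match(left, right, x))  # → 3
```

Window matching like this fails on textureless regions (every window looks alike), which is exactly why feature-based approaches such as Laplacian-of-Gaussian zero crossings are popular.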

21 of 25: Stereo Vision Example
– Depth image (bright = close, dark = far)
– Confidence image (bright = high confidence)
– Vertical-edge-filtered left and right images
– Original left and right images

22 of 25: Summary
Today we looked at some of the vision techniques commonly used in robotics. We have barely scratched the surface, but hopefully you have gained some appreciation of the difficulties involved.

23 of 25: Questions?