Presentation transcript: "Mobile Robotics: 7. Vision 2" by Dr. Brian Mac Namee

1 Blind-Spot Experiment. Draw an image similar to that below on a piece of paper (the dot and cross are about 6 inches apart). Close your right eye and focus on the cross with your left eye. Hold the image about 20 inches away from your face and move it slowly towards you. The dot should disappear!

3 Mobile Robotics: 7. Vision 2. Dr. Brian Mac Namee (www.comp.dit.ie/bmacnamee)

4 Acknowledgments. These notes are based (heavily) on those provided by the authors to accompany “Introduction to Autonomous Mobile Robots” by Roland Siegwart and Illah R. Nourbakhsh. More information about the book is available at http://autonomousmobilerobots.epfl.ch/. The book can be bought at The MIT Press and Amazon.com.

5 Today’s Lecture. Today we will take a quick tour of some of the uses of vision sensors in robotics, including:
–Colour models
–Object recognition
–Face recognition
–Stereo vision
–Object tracking
We will give only a brief overview of all of this, as computer vision is far too massive a subject to cover in our course.

6 RGB Colour Model. Think of R, G and B as an orthogonal colour basis:
–(1,0,0) – pure red
–(0,1,0) – pure green
–(0,0,1) – pure blue
–(1,1,1) – white
–(0,0,0) – black (hidden)

7 HSV Colour Model. More robust against illumination changes, but we still must confront noise, specularity, etc.
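As a rough illustration of why HSV is more robust, the sketch below (plain Python using the standard colorsys module; the pixel values are invented) shows that halving a colour's brightness changes V while leaving H and S essentially untouched:

```python
import colorsys

# Two samples of the "same" surface colour under different illumination
# (values are invented for illustration; channels are in the range [0, 1]).
bright = (0.8, 0.2, 0.1)
dim = (0.4, 0.1, 0.05)  # roughly the same colour at half the brightness

for name, (r, g, b) in [("bright", bright), ("dim", dim)]:
    h, s, v = colorsys.rgb_to_hsv(r, g, b)
    print(f"{name}: H={h:.3f}  S={s:.3f}  V={v:.3f}")

# H and S come out identical for both samples, while V halves.
# Thresholding on H (and S) is therefore less sensitive to lighting than
# thresholding directly on R, G and B.
```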

8 Object Detection. Suppose we want to detect an object (e.g., a coloured ball) in the field of view. We simply need to identify the pixels of some desired colour in the image… right? [Figure: image coordinate frame, with origin, pixel axes u and v, and image width and height.]
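A minimal colour-thresholding sketch of this idea is given below, using OpenCV; the HSV bounds for the ball are placeholder values, not figures from the lecture:

```python
import cv2
import numpy as np

def detect_coloured_ball(bgr_image):
    """Return the (u, v) centroid of pixels falling inside an assumed colour range."""
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)

    # Hypothetical bounds for an orange ball; these must be tuned for the
    # real object, camera and lighting conditions.
    mask = cv2.inRange(hsv, (5, 120, 80), (20, 255, 255))

    vs, us = np.nonzero(mask)                # row (v) and column (u) indices
    if len(us) == 0:
        return None                          # no pixels of the desired colour found
    return int(us.mean()), int(vs.mean())    # image coordinates (u, v)
```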

9 It’s Not That Easy!
–Occluded light source
–Specular highlights
–Mixed pixels
–Complex surface geometry (self-shadowing)
–Noise!

10 Evolution Robotics’ ViPR System. Evolution Robotics’ ViPR (visual pattern recognition) technology provides a reliable and robust vision solution to object recognition. The technique is based on extracting salient features from an image. Salient features are artefacts such as edges, corners, etc. The description of an object is a set of up to a thousand salient features, the textures of the pixels around them, and their relationships to each other.

11 Evolution Robotics’ ViPR System (cont…). Object recognition then involves first finding all of the features in a new image. These features are matched against those of all of the models in a database. If many features in a new image are the same as those in a database model, the model is a good candidate match. Further accuracy is obtained by comparing the relative positions of the matched features in the image and the model. The model in the database with the best match score is then recognised.
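ViPR itself is proprietary, but the pipeline described above is close in spirit to SIFT keypoint matching (Lowe's paper is cited on the ViPR Pros & Cons slide below). The OpenCV sketch that follows is an analogy, not Evolution Robotics' actual code, and assumes a build in which SIFT is available:

```python
import cv2

def match_score_against_model(model_gray, scene_gray, ratio=0.75):
    """Count keypoint matches between a stored model image and a new image.

    A sketch of database-model matching in the ViPR style using SIFT features;
    the ratio test follows Lowe's paper cited below.
    """
    sift = cv2.SIFT_create()
    kp_model, des_model = sift.detectAndCompute(model_gray, None)
    kp_scene, des_scene = sift.detectAndCompute(scene_gray, None)

    matcher = cv2.BFMatcher()
    candidates = matcher.knnMatch(des_model, des_scene, k=2)

    # Keep a match only if it is clearly better than the second-best candidate.
    good = []
    for pair in candidates:
        if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
            good.append(pair[0])
    return len(good)   # the database model with the highest score wins
```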

12 ViPR Example

13 ViPR Pros & Cons. The advantages of the ViPR system include:
–Invariance to rotation and affine transformation
–Invariance to changes in scale
–Invariance to lighting changes
–Invariance to occlusions
–Reliable recognition
However, while ViPR can be used to recognise symbols and 3D objects, it cannot be used to recognise deformable 3D objects such as faces. For more information on the technologies behind ViPR, have a look at:
–“Core Technologies for Service Robotics”, N. Karlsson, M. E. Munich, L. Goncalves, J. Ostrowski, E. Di Bernardo & P. Pirjanian
–“Distinctive Image Features from Scale-Invariant Keypoints”, D. Lowe

14 ViPR Demo

15 Vision Through Colour Tracking. Often colour alone can be used to perform vision tasks. We use flood-fill techniques like those in Photoshop. This is particularly useful in controlled environments.
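A minimal flood-fill sketch (Photoshop-style region growing) using OpenCV is shown below; the seed point and colour tolerance are placeholders:

```python
import cv2
import numpy as np

def flood_fill_region(bgr_image, seed=(100, 100), tol=20):
    """Grow a region of similar colour outwards from a seed pixel and return its mask."""
    h, w = bgr_image.shape[:2]
    mask = np.zeros((h + 2, w + 2), np.uint8)   # floodFill requires a 2-pixel border

    # 4-connectivity; write the selected region into the mask (value 255)
    # instead of recolouring the image itself.
    flags = 4 | cv2.FLOODFILL_MASK_ONLY | (255 << 8)
    cv2.floodFill(bgr_image, mask, seed, (0, 0, 0),
                  (tol, tol, tol), (tol, tol, tol), flags)

    return mask[1:-1, 1:-1]   # binary mask of the filled (tracked) region
```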

16 Face Tracking Using Colour Alone. Pipeline: image acquisition → RGB to HSV conversion → skin colour binary image → image closing → segmentation → selection by size.
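A rough OpenCV sketch of this pipeline follows; the skin-colour HSV bounds are assumed placeholder values, not the ones used in the original system:

```python
import cv2
import numpy as np

def track_face_by_colour(bgr_frame):
    """Skin-colour face tracking: HSV conversion, skin threshold, closing,
    segmentation, then selection of the largest (face-sized) blob."""
    hsv = cv2.cvtColor(bgr_frame, cv2.COLOR_BGR2HSV)

    # Skin colour binary image (assumed HSV range; adjust for camera and lighting).
    skin = cv2.inRange(hsv, (0, 40, 60), (25, 180, 255))

    # Image closing: fill small holes inside the skin regions.
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (7, 7))
    closed = cv2.morphologyEx(skin, cv2.MORPH_CLOSE, kernel)

    # Segmentation into connected components, then selection by size.
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(closed)
    if n <= 1:
        return None                                  # background only, no skin blobs
    largest = 1 + np.argmax(stats[1:, cv2.CC_STAT_AREA])
    x, y, w, h, _ = stats[largest]
    return int(x), int(y), int(w), int(h)            # bounding box of the face candidate
```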

17 Stereo Vision. Stereo vision is used to determine distance from the differences between images taken by two cameras positioned slightly apart – just like our eyes!

18 Stereo Vision. Idealized camera geometry for stereo vision.

19 Stereo Vision. A point visible from both cameras produces a conjugate pair.
–Conjugate pairs lie on an epipolar line (parallel to the x-axis for the arrangement in the figure above).
From a conjugate pair, distance can be estimated.
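For the idealized geometry above, the depth of the point is z = f·b/d, where f is the focal length, b is the baseline between the cameras, and d is the disparity (the difference in u-coordinates of the conjugate pair). A tiny worked sketch with made-up numbers:

```python
# All values below are invented for illustration only.
f = 700.0                        # focal length in pixels (assumed)
b = 0.12                         # baseline between the cameras in metres (assumed)

u_left, u_right = 412.0, 380.0   # u-coordinates of a conjugate pair (assumed)
d = u_left - u_right             # disparity = 32 pixels

z = f * b / d                    # depth z = f*b/d = 2.625 metres
print(f"disparity = {d:.1f} px, depth = {z:.3f} m")
```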

20 Stereo Vision. Calculation of Depth:
–The key problem in stereo is the correspondence problem: how do we decide which point in the right image matches a given point in the left image?
Gray-Level Matching:
–Match gray-level waveforms along corresponding epipolar lines.
–“Brightness” = image irradiance I(x,y).
–The zero crossing of the Laplacian of Gaussian is a widely used approach for identifying features in the left and right images.
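OpenCV's block matcher performs exactly this kind of windowed gray-level matching along epipolar lines on rectified image pairs. A rough sketch (the focal length and baseline are placeholders):

```python
import cv2
import numpy as np

def depth_from_stereo(left_gray, right_gray, focal_px=700.0, baseline_m=0.12):
    """Dense depth via gray-level block matching along epipolar lines.
    Assumes rectified 8-bit grayscale images; f and b here are made-up values."""
    stereo = cv2.StereoBM_create(numDisparities=64, blockSize=15)

    # compute() returns fixed-point disparities scaled by 16.
    disparity = stereo.compute(left_gray, right_gray).astype(np.float32) / 16.0

    depth = np.zeros_like(disparity)
    valid = disparity > 0                                      # pixels with a match
    depth[valid] = focal_px * baseline_m / disparity[valid]    # z = f*b/d
    return depth
```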

21 Stereo Vision Example. Figure panels:
–Depth image (bright = close, dark = far)
–Confidence image (bright = high confidence)
–Vertical-edge-filtered left and right images
–Original left and right images

22 Summary. Today we looked at some of the vision techniques commonly used in robotics. We have barely scratched the surface, but hopefully you have gained some appreciation of the difficulties involved.

23 Questions?

