Quick Overview of Robotics and Computer Vision

Computer Vision Agent Environment camera Light ?

Computer Vision Applications Light is probably the most valuable information that humans sense about their environment, and CV has a wide range of applications: 1. Object classification 2. 3D reconstruction 3. Motion analysis

Bag-of-visual-words processing: A common CV pipeline for object classification. Converts an image into a fixed-length vector, much like the bag-of-words model for text. After that, you can apply standard ML techniques to build a classifier.
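The pipeline above can be sketched end to end with toy data. This is a minimal sketch in which made-up random vectors stand in for real local descriptors (a real pipeline would extract e.g. SIFT or ORB descriptors from images), and the visual vocabulary is built with a few hand-rolled k-means iterations:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for local feature descriptors (real pipelines use e.g. 128-D SIFT);
# each image contributes a variable number of 16-D descriptors.
image_descriptors = [rng.normal(size=(n, 16)) for n in (40, 55, 30)]
all_desc = np.vstack(image_descriptors)

# 1. Build a small "visual vocabulary" with a few k-means iterations.
k = 8
centroids = all_desc[rng.choice(len(all_desc), k, replace=False)]
for _ in range(10):
    # assign each descriptor to its nearest centroid
    d2 = ((all_desc[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    labels = d2.argmin(axis=1)
    # move each centroid to the mean of its members (keep it if empty)
    new_c = []
    for j in range(k):
        members = all_desc[labels == j]
        new_c.append(members.mean(axis=0) if len(members) else centroids[j])
    centroids = np.array(new_c)

# 2. Each image becomes a normalized histogram over the k visual words --
#    a fixed-length vector regardless of how many features the image had.
def bovw_vector(desc):
    d2 = ((desc[:, None, :] - centroids[None, :, :]) ** 2).sum(-1)
    hist = np.bincount(d2.argmin(axis=1), minlength=k).astype(float)
    return hist / hist.sum()

vectors = [bovw_vector(d) for d in image_descriptors]
print(vectors[0].shape)  # (8,)
```

Once every image is an 8-D vector, any off-the-shelf classifier can be trained on those vectors.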

Another common pipeline: Feature detection + correspondences 1. Find “features” in an image. These are typically things like corners, edges, or just very distinctive image patches.

Another common pipeline: Feature detection + correspondences 1. Find “features” in an image. 2. Find correspondences between features in similar images. (Blue line shows a correspondence between features in stereo images.)

Another common pipeline: Feature detection + correspondences 1. Find “features” in an image. 2. Find correspondences between features in similar images. 3. Recover depth.
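Steps 1 and 2 can be illustrated with a toy stereo pair. This sketch skips real feature detection and simply compares small patches by sum-of-squared-differences along a row; the images, patch centres, and pixel shift below are all made up for illustration:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in: a synthetic "left image" and a horizontally shifted "right image".
left = rng.normal(size=(20, 40))
shift = 3
right = np.roll(left, -shift, axis=1)  # the right view sees features shifted left

# 1. "Detect" features: here, just pick a few fixed patch centres.
centres = [(10, 15), (10, 20), (10, 25)]
P = 2  # patch half-size

def patch(img, r, c):
    return img[r - P:r + P + 1, c - P:c + P + 1]

# 2. Find correspondences: for each left patch, scan the same row of the
#    right image for the column with the smallest sum-of-squared-differences.
def match(r, c):
    best_col, best_ssd = None, np.inf
    for c2 in range(P, right.shape[1] - P):
        ssd = ((patch(left, r, c) - patch(right, r, c2)) ** 2).sum()
        if ssd < best_ssd:
            best_col, best_ssd = c2, ssd
    return best_col

disparities = [c - match(r, c) for r, c in centres]
print(disparities)  # each feature moved by the known shift: [3, 3, 3]
```

The per-feature column shift recovered here is the disparity, which step 3 turns into depth.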

Improved Stereo Vision Using two (or more) cameras to recover the “depth” of pixels in a scene is called Stereo Vision or Stereo Reconstruction. Some techniques to improve it: -Structured light (like the Kinect) -Other sensors (e.g., laser range finders)
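For a calibrated, rectified camera pair, depth follows from disparity via the standard pinhole relation Z = f·B/d (focal length f in pixels, baseline B between the cameras, disparity d in pixels). A minimal sketch; the numeric values below are purely illustrative:

```python
# Depth from a rectified stereo pair:
#     Z = f * B / d
# f: focal length (pixels), B: baseline (metres), d: disparity (pixels).
def depth_from_disparity(f_px, baseline_m, disparity_px):
    if disparity_px <= 0:
        raise ValueError("feature must appear shifted between the two views")
    return f_px * baseline_m / disparity_px

# A feature with 40 px disparity, f = 800 px, B = 0.1 m is 2 m away.
print(depth_from_disparity(800, 0.1, 40))  # 2.0
```

Note the inverse relation: distant points have tiny disparities, which is why a longer baseline (or structured light) improves depth resolution.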

Robotics: Putting it all together Agent Environment cameras (Light, Laser, GPS) motors (Motion) - Representation of beliefs and goals - Estimation of state based on previous state and sensor information - Planning of actions to reach goals - Learning from previous examples to improve over time.
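The capabilities above fit into a sense → estimate → plan → act loop. This is a deliberately toy 1-D sketch: the blending filter and one-step planner are stand-ins, not real estimation or planning algorithms:

```python
# A toy sense -> estimate -> plan -> act loop in one dimension.
def estimate(belief, reading):
    # state estimation: blend the previous belief with the new sensor reading
    return 0.5 * belief + 0.5 * reading

def plan(state, goal):
    # planning: pick an action (move one unit toward the goal)
    return 1.0 if goal > state else -1.0

belief, goal = 0.0, 5.0
for _ in range(20):
    action = plan(belief, goal)
    reading = belief + action  # pretend the action succeeded and we sensed it
    belief = estimate(belief, reading)
# belief has converged near the goal
```

A real robot would replace `estimate` with a Bayes filter over the full state and `plan` with a path planner over the map.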

Robotics Quiz For each of the following problems, which technique from our class would you use to solve it? 1. Determine the position of the robot, based on the sensor reading at the current time step and the belief about the position at the previous time step.

Robotics Quiz For each of the following problems, which technique from our class would you use to solve it? 2. Given a road map, a belief about the starting position, and actions for driving forward and turning, determine a sequence of actions to get you to a desired point on the map.

Robotics Quiz For each of the following problems, which technique from our class would you use to solve it? 3. Given a camera feed aimed at an object on a table, come up with a sequence of robotic arm/finger motions that will pick up the object.

Robotics Quiz For each of the following problems, which technique from our class would you use to solve it? 4. Given a set of examples of robotic car maneuvers in a fixed area (several city blocks, say), some that successfully got to their goal, and some that didn’t (crashed or failed to reach the goal), determine the best maneuver for each location in the area.

Additional Robotics Questions Representation: How complicated is it (how many variables/dimensions do you need) to represent the position and orientation of a robot car on a map? How many variables/dimensions do you need to represent the position, orientation, change in position, and change in orientation of a robot car on a map? The position and orientation are called the kinematic state of a robot. The position, orientation, change in position, and change in orientation are called the dynamic state of a robot.

Answer: Additional Robotics Questions Representation: How complicated is it (how many variables/dimensions do you need) to represent the position and orientation of a robot car on a map? How many variables/dimensions do you need to represent the position, orientation, change in position, and change in orientation of a robot car on a map? For a 2D world, the kinematic state requires 3 variables (dimensions): 1 for position on the x axis, 1 for position on the y axis, and 1 for the compass direction (or angle away from North). For a 2D world, the dynamic state requires 5 variables (dimensions): 3 for the kinematic state, 1 for forward velocity, and 1 for “turning velocity” (the yaw rate).
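The two state representations can be written down directly; here is a sketch using Python dataclasses (the field names are illustrative, not from the slides):

```python
from dataclasses import astuple, dataclass

# Kinematic state of a car on a 2-D map: 3 variables.
@dataclass
class Kinematic2D:
    x: float        # position on the x axis
    y: float        # position on the y axis
    heading: float  # compass direction (angle away from North)

# Dynamic state: the kinematic state plus how it is changing, 5 variables.
@dataclass
class Dynamic2D:
    x: float
    y: float
    heading: float
    speed: float     # forward velocity
    yaw_rate: float  # "turning velocity": change in heading

print(len(astuple(Kinematic2D(0, 0, 0))))      # 3
print(len(astuple(Dynamic2D(0, 0, 0, 0, 0))))  # 5
```

Only one velocity plus one turning rate are needed (rather than separate x- and y-velocities) because a car cannot slide sideways: its velocity direction is tied to its heading.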

Additional Robotics Questions 3D Representation: How many variables/dimensions are needed for the kinematic and dynamic states of a robot helicopter? Many-D Representation: How many variables/dimensions do you need for a robotic arm that has six rotation joints?

Answer: Additional Robotics Questions 3D Representation: How many variables/dimensions are needed for the kinematic and dynamic states of a robot helicopter? 6 for kinematic, 12 for dynamic. Many-D Representation: How many variables/dimensions do you need for a robotic arm that has six rotation joints? The answer really depends on the orientation of the joints with respect to one another. But if each joint provides motion independent of the previous joints, you would need 6 variables for the kinematic state (the angle of each joint), plus 6 additional variables for the dynamic state (the change in each joint angle).
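To make the six kinematic variables of the arm concrete, here is a toy planar forward-kinematics sketch: six revolute joints with one angle each, and made-up unit link lengths (a real arm would be 3-D and use its actual link geometry):

```python
import math

# Made-up link lengths for a planar arm with six revolute joints.
LINKS = [1.0] * 6

def end_effector(angles):
    """Map the 6 joint angles (the kinematic state) to the hand position."""
    x = y = 0.0
    total = 0.0
    for length, theta in zip(LINKS, angles):
        total += theta               # joint angles accumulate along the chain
        x += length * math.cos(total)
        y += length * math.sin(total)
    return x, y

print(end_effector([0.0] * 6))  # fully stretched along x: (6.0, 0.0)
```

The six angles fully determine the arm's pose; the dynamic state would add the six angular velocities.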

Some robotic videos $3000 DARPA robot hand (very cheap for a robot hand). Weird robot.