CMPUT 412: 3D Vision and Sensing. 3D modeling from images can be complex; 3D measurements from images can be wrong.

Previous lectures: 2D machine vision and image processing. So far, 2D vision for measurements on a 2D world plane: usually an overhead camera pointing straight down at a work table. Adjust the camera position so that pixel [u,v] = s[X,Y], where s is a scale factor (pixels/mm). Pixel coordinates are then scaled world coordinates. [Figure: overhead camera above a robot scene; thresholded image.]
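In this calibrated overhead setup, converting between pixel and world coordinates is a single multiplication. A minimal sketch, assuming a hypothetical scale factor value:

```python
# With a calibrated overhead camera, world-plane coordinates are just
# scaled pixel coordinates: [X, Y] = (1/s) [u, v] with s in pixels/mm.
S = 2.0  # assumed scale factor, pixels per mm (example value)

def pixel_to_world(u, v, s_px_per_mm=S):
    """Map pixel coordinates [u, v] to world-plane coordinates [X, Y] in mm."""
    return u / s_px_per_mm, v / s_px_per_mm

X, Y = pixel_to_world(200, 100)  # -> (100.0, 50.0) mm
```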

A camera projects the 3D world to 2D images in a complex way: 3D points project along rays of light that pass through the camera's center of projection.

A camera projects the 3D world to 2D images in a complex way Shigeo Fukuda

A camera projects the 3D world to 2D images in a complex way Shigeo Fukuda

The structure of ambient light

We focus on camera geometry: the pinhole camera. Each object point is projected along a ray through the center of projection onto the image plane; the relevant quantities are the center of projection, the focal length, and the image plane. The 3D-to-2D transformation is perspective projection.

Perspective projection: center of projection, focal length f, image plane. We add coordinate systems in order to describe feature points. We can save conscious mental gyrations by placing the image plane in front of the center of projection.

Coordinate systems: the camera's canonical axes x, y, z are centered at the center of projection, with the optical (z) axis passing through the principal point of the image; pixel coordinates are u (column) and v (row). Scene structure: X = (X, Y, Z). Image structure: x = (x, y). How are these related?

Points between 2D and 3D: a scene point X = (X, Y, Z) projects to the image point x = (x, y) with x = f X / Z and y = f Y / Z. This is nonlinear (division by Z). But this is only an ideal approximation...
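The ideal pinhole projection above can be sketched in a few lines; this is a minimal illustration of the equations x = fX/Z, y = fY/Z, not a full camera model (no principal point offset or distortion):

```python
# Ideal pinhole (perspective) projection, assuming camera-centered
# coordinates with the optical axis along +Z.
def project(point_3d, f=1.0):
    """Project a 3D point (X, Y, Z) to image coordinates (fX/Z, fY/Z)."""
    X, Y, Z = point_3d
    if Z <= 0:
        raise ValueError("point must be in front of the camera (Z > 0)")
    return f * X / Z, f * Y / Z

x, y = project((2.0, 4.0, 2.0), f=1.0)  # -> (1.0, 2.0)
```

Note the nonlinearity: doubling Z halves the image coordinates, which is why distant objects look smaller.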

It could be worse... (and often is!) Real cameras don't produce an exact pinhole projection, but good cameras come close. Deviations include focus (blur) and lens distortion (straight lines imaged as curves).

CAMERA INTERNAL CALIBRATION: image a target at a known distance d with features at a known regular offset r. This is a simple way to get the scale parameters; we can take the optical center to be the numerical center of the image and thereby obtain the intrinsic parameters. Compute Sx; focal length = 1/Sx.
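A sketch of how the known target geometry yields a focal length, under the pinhole model. The function name and example values are hypothetical; the idea is that features spaced r apart at distance d appear f_px * r / d pixels apart in the image, so measuring the pixel spacing recovers the focal length in pixels:

```python
# Hypothetical calibration sketch: targets at known distance d (mm) with
# known spacing r (mm) are measured to be 'pixels_apart' pixels apart.
# From the pinhole model, pixel spacing = f_px * r / d, hence:
def focal_length_pixels(pixels_apart, d, r):
    """Recover focal length in pixels from one known-geometry measurement."""
    return pixels_apart * d / r

f_px = focal_length_pixels(pixels_apart=50.0, d=1000.0, r=100.0)  # -> 500.0
```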

Stereo Vision. GOAL: a passive two-camera system that triangulates the 3D positions of points in space to generate a depth map of a world scene. Humans use stereo vision to obtain depth.

Stereo depth calculation: simple case, aligned cameras. The left camera is at (0, 0) and the right camera at (d, 0), both with focal length f; a scene point (X, Z) images at XL in the left camera and XR in the right.

Similar triangles: Z = (f / XL) X and Z = (f / XR) (X - d).

Solve for X: (f / XL) X = (f / XR) (X - d), so X = XL d / (XL - XR).

Solve for Z: Z = f d / (XL - XR).

DISPARITY = (XL - XR).
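The aligned-camera result Z = f d / (XL - XR) translates directly to code; a minimal sketch with example values:

```python
# Depth from disparity for aligned cameras: Z = f * d / (xL - xR),
# with baseline d and depth Z in the same units.
def depth_from_disparity(x_left, x_right, f, baseline):
    disparity = x_left - x_right
    if disparity <= 0:
        raise ValueError("disparity must be positive for a point "
                         "in front of both cameras")
    return f * baseline / disparity

Z = depth_from_disparity(12.0, 10.0, f=500.0, baseline=0.1)  # -> 25.0
```

Note the reciprocal relationship: depth resolution degrades as disparity shrinks, which is why stereo is most accurate for nearby objects.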

Your basic stereo algorithm. For each epipolar line: for each pixel in the left image, compare with every pixel on the same epipolar line in the right image and pick the pixel with minimum match cost. Improvement: match windows rather than single pixels. This should look familiar...
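A minimal sketch of the window-matching improvement, assuming rectified images (so corresponding points share a row) and using sum of absolute differences (SAD) as the match cost; the function and parameter names are illustrative:

```python
import numpy as np

def disparity_for_pixel(left, right, row, col, half_win=2, max_disp=16):
    """Return the disparity minimizing SAD window cost for one left-image pixel."""
    r0, r1 = row - half_win, row + half_win + 1
    c0, c1 = col - half_win, col + half_win + 1
    patch = left[r0:r1, c0:c1].astype(np.float64)
    best_d, best_cost = 0, np.inf
    # Slide the window along the same row in the right image.
    for d in range(0, min(max_disp, c0) + 1):
        cand = right[r0:r1, c0 - d:c1 - d].astype(np.float64)
        cost = np.abs(patch - cand).sum()  # SAD match cost
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Running this at every pixel yields a disparity map, which the depth equation from the previous slide converts to a depth map.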

Commercial stereo systems: Point Grey Bumblebee (two cameras plus software). Real-time, with "infinite" range.

SVM stereo head mounted on an all-terrain robot. Stereo camera: Videre Design. Robot: Shrimp, EPFL. Applications of stereo vision: traversability calculation based on stereo images for outdoor navigation; motion tracking.

For non-aligned cameras: match along epipolar lines. Special case: with parallel cameras, the epipolar lines are parallel and aligned with the image rows.

Structured light method: replace one camera with a light-stripe projector and calculate the shape from how the stripe is distorted. The same depth/disparity equation as for two cameras applies. [Machine vision setup.]

Real-time virtual 3D scanner using structured light technology. Demo.

Kinect: another structured-light method. It uses dots rather than stripes.

Kinect Hardware

Seeing the IR dots emitted by the Kinect.

Time-of-flight laser method: send IR laser light in different directions and sense how each beam is delayed; use the delay to calculate the distance of the object point.
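The delay-to-distance conversion is just the round-trip travel time of light; a minimal sketch:

```python
# Time-of-flight range: distance = c * t / 2, since the measured delay t
# covers the round trip out to the surface and back.
C = 299_792_458.0  # speed of light, m/s

def tof_distance(delay_s):
    """Convert a measured round-trip delay (seconds) to range (meters)."""
    return C * delay_s / 2.0

d = tof_distance(1e-7)  # a 100 ns delay -> roughly 15 m
```

The tiny delays involved (nanoseconds per meter) are why time-of-flight sensors need very fast, precise timing electronics.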

Time of flight laser camera

LIDAR (light detection and ranging) scanner, e.g. the Leica terrestrial lidar scanner.

3D laser scanning: underground mine mapping. Demo.

Motion capture (MOCAP) for film production: IR light emitters and cameras.

3D body scanner

3-D Face capture

Dimensional Imaging: 4D video face capture with textures.

Vision and range sensing. The past: mobile robots used rings of ultrasound or IR distance sensors for obstacle avoidance or crude navigation, and robot arms used VGA cameras to track a few points. Now: full camera pose tracking and 3D scene reconstruction are possible with inexpensive cameras and processing (e.g. PTAM on a Raspberry Pi + camera: $50 and 20 grams), and we can track hundreds of interest points for image-based visual control. The next years: active range sensing (RGB-D, e.g. Kinect) is growing in popularity indoors, while passive camera vision remains important, especially outdoors and on UAVs.