
Computer Vision, Robert Pless, Lecture 11. Our goal is to understand the process of multi-camera vision. Last time, we studied the "Essential" and "Fundamental" matrices, which describe where a point in one image may appear in a second image, and how to solve the stereo matching problem.

Projections one more time! The "normalized camera" helps to simplify geometric properties. Today, we consider what happens when points in the scene move, and a little bit about how to measure motion on an image.

The simple (normalized) form allows us to write the projection as x = X/Z, y = Y/Z. Two reasons points might move: 1) the point itself is moving (it's a bird, it's a plane…); 2) the world is static and the camera is moving (but if we define the coordinate system to be at the camera, then all the points move).

Points moving? How can we possibly represent that? The instantaneous representation of motion is a velocity vector. So how does the instantaneous world motion turn into instantaneous image motion?

So, how does the "apparent" speed (the image speed) of an object relate to: its speed, and its distance? Which points look like they aren't moving?
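The speed-versus-distance relation can be made concrete. A minimal sketch, assuming the normalized camera x = X/Z from the slides and a point moving parallel to the image plane (the function name is illustrative, not from the lecture):

```python
def image_speed(U, Z):
    """Image-plane speed of a point at depth Z moving parallel to the
    image plane with world speed U.  Under x = X/Z (with Z constant),
    the projected speed is U / Z: apparent speed falls off with depth."""
    return U / Z

print(image_speed(1.0, 2.0))   # nearby point: 0.5
print(image_speed(1.0, 10.0))  # same world speed, five times farther: 0.1
```

So two points with the same world velocity can have very different image velocities, and a very distant point barely appears to move at all.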

That was for one point moving. Suppose we move the camera, *but continue to use it as the coordinate system*. Then all the points in the world move. How do they move, instantaneously? Instantaneous motion of the camera can be written as a translational velocity and an angular velocity. Commonly, this is written as a pair of vectors: t and ω.

New variables… [Figure: camera coordinate frame with axes X, Y, Z and origin O.]

Calculating a cross product, the velocity of a world point is Ẋ = −t − ω × X. Substituting x = X/Z (so X = xZ): if the camera moves with t, ω, then the world points move as above. How do the pixels move?
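Carrying that substitution through gives the classic rigid-motion flow equations. A sketch in Python, under one common sign convention (signs flip with the choice of camera-motion versus world-motion conventions, so treat the exact signs as an assumption):

```python
def rigid_flow(x, y, Z, t, w):
    """Instantaneous image motion (u, v) at normalized image point (x, y)
    with depth Z, for camera translation t = (U, V, W) and angular
    velocity w = (wx, wy, wz).  The translational part scales with 1/Z;
    the rotational part is depth-independent."""
    U, V, W = t
    wx, wy, wz = w
    u = (-U + x * W) / Z + x * y * wx - (1 + x * x) * wy + y * wz
    v = (-V + y * W) / Z + (1 + y * y) * wx - x * y * wy - x * wz
    return u, v

# Forward translation: flow radiates outward from the focus of expansion.
print(rigid_flow(0.5, 0.0, 1.0, (0, 0, 1), (0, 0, 0)))  # (0.5, 0.0)
```

Note how the two terms separate: only the translational flow carries depth information, which is why rotation is a nuisance to remove.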

Pixel motion defines a vector field. (a) Motion field of a pilot looking straight ahead while approaching a fixed point on a landing strip. (b) Pilot is looking to the right in level flight.

Examples of Vector Fields II. (a) Translation perpendicular to a surface. (b) Rotation about an axis perpendicular to the image plane. (c) Translation parallel to a surface at a constant distance. (d) Translation parallel to an obstacle in front of a more distant background.

Image flow due to rigid motion. Ambiguities?

So, if we can measure u, v (the local motion of pixels on an image), then we could solve for U, V, W and the rotation. The vector (u, v) is called "optic flow." Solving for this vector field has long been considered a key first step in vision algorithms. A good gyroscopic stabilizer makes the rotation part zero!

Grounding image (a preview of the Matlab joys…).

Solving with a small window runs into "the aperture problem." Normal flow: the component of flow perpendicular to a line feature.

But, but, but: the world isn't just lines! Let's go backwards. Suppose you know the optic flow (u, v) at a pixel (x, y) at frame t. The intensity of that point is I(x, y, t). What assumptions can you use to guess how the intensity will change in the next frame? We can do a *first-order Taylor series expansion* of the intensity function at that image point.

Optic flow is a 2D vector (u, v) on the image. Assuming: intensity only changes due to the motion, and the derivatives are smooth, we get a constraint: I_x u + I_y v + I_t = 0. This defines a line in velocity space, so we require an additional constraint to define the optic flow.
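The constraint line, and the normal flow it determines, can be sketched directly. At a single pixel, only the flow component along the image gradient is recoverable (the function name here is illustrative, not from the lecture):

```python
def normal_flow(Ix, Iy, It):
    """The point on the constraint line Ix*u + Iy*v + It = 0 closest to
    the origin: the flow component along the image gradient, which is
    all a single pixel's constraint determines."""
    mag2 = Ix * Ix + Iy * Iy
    if mag2 == 0:
        return (0.0, 0.0)         # no gradient: the constraint is vacuous
    return (-It * Ix / mag2, -It * Iy / mag2)

u, v = normal_flow(Ix=1.0, Iy=0.0, It=-2.0)
print(u, v)                        # (2.0, 0.0)
print(1.0 * u + 0.0 * v + -2.0)    # constraint satisfied: 0.0
```

Any flow of the form (u, v) + s·(−Iy, Ix) also satisfies the constraint, which is the aperture problem restated algebraically.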

Solving the aperture problem. How to get more equations for a pixel? Basic idea: impose additional constraints. Most common is to assume that the flow field is smooth locally; one method: pretend the pixel's neighbors have the same (u, v). If we use a 5x5 window, that gives us 25 equations per pixel!
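Those 25 equations in 2 unknowns are exactly a least-squares solve, which is the Lucas-Kanade idea. A minimal NumPy sketch; the derivative windows below are synthetic, built to be consistent with a known flow, just for illustration:

```python
import numpy as np

def window_flow(Ix, Iy, It):
    """Least-squares (u, v) from a window of image derivatives: each
    pixel contributes one row of Ix*u + Iy*v = -It, so a 5x5 window
    gives 25 equations in the 2 unknowns."""
    A = np.stack([Ix.ravel(), Iy.ravel()], axis=1)   # 25 x 2
    b = -It.ravel()                                  # 25
    uv, *_ = np.linalg.lstsq(A, b, rcond=None)
    return uv

# Synthetic check: derivatives consistent with a known flow (1.0, 0.5).
rng = np.random.default_rng(0)
Ix = rng.standard_normal((5, 5))
Iy = rng.standard_normal((5, 5))
It = -(Ix * 1.0 + Iy * 0.5)
print(window_flow(Ix, Iy, It))   # close to [1.0, 0.5]
```

The solve fails when the window's gradients all point the same way (A has rank 1), which is the aperture problem reappearing: good flow estimates need textured, corner-like windows.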