AIBO Camera Stabilization
Tom Stepleton and Ethan Tira-Thompson
16-720, Fall 2003


AIBO vision is bumpy
Legged locomotion induces vibration. The head (the camera mount) is a great big cantilevered mass.

Camera problems: the lineup
Obvious problems due to exposure time/cheap optics: lens distortion, smear.

Camera problems: the lineup
Subtle problems due to sample rate: skew/bending, stretching.


Goal: Take AIBO from this…

…to this: Smooooooooth…

Stabilization Basics
Compute homographies between successive images in your sequence.
Transform the sequence images one by one to make a continuous, smooth stream.
Problems: error accumulates, especially with more degrees of freedom (e.g. affine transformations). Many papers are about dealing with this.
[Figure: two successive frames related by a homography H.]
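
As a minimal illustration of this slide, the sketch below registers one frame to its predecessor, assuming point correspondences between the two images are already in hand (how to find them comes up a few slides later). OpenCV and NumPy stand in for the tools used in the actual project, and the function name is only illustrative.

```python
# Minimal sketch: warp Image N into the coordinate frame of Image N-1,
# given matching points. OpenCV/NumPy are stand-ins for the project's
# actual tools; the function name is illustrative.
import cv2
import numpy as np

def register_to_previous(img_curr, pts_curr, pts_prev, out_size):
    """pts_curr, pts_prev: (K, 2) float32 arrays of corresponding points.
    out_size: (width, height) of the output canvas."""
    # H maps points of the current image onto the previous image.
    H, _ = cv2.findHomography(pts_curr, pts_prev, cv2.RANSAC, 3.0)
    return cv2.warpPerspective(img_curr, H, out_size), H
```

Chaining these pairwise registrations over a long sequence is exactly where the accumulated error mentioned above comes from.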

Mosaicing Video Sequences (Netzer, Gotsman)
Suggests using a sliding window of multiple images to compute a more accurate registration of each frame.
–We ignore this: it is too computationally intensive and introduces lag in the images.
Treat the motion of each new image as a translation, rigid, similarity, affine, or projective transformation. Try each and pick the one with the lowest error, with some bonus for “simpler” transformations.
–Instead of trying all five on each frame at run time, we ran some trials and found rigid transformations satisfactory.
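
The per-frame model selection could look roughly like the sketch below: fit each candidate family to the matched points, score it by mean reprojection error plus a penalty that grows with its degrees of freedom, and keep the cheapest model. The penalty weight and the OpenCV estimators are assumptions for illustration (cv2.estimateAffinePartial2D actually fits a similarity, standing in for the rigid case); the project itself settled on rigid transforms offline instead.

```python
# Sketch of "try each transform family, pick the lowest error, with a
# bonus for simpler models". The estimators and the penalty weight LAMBDA
# are illustrative assumptions, not the paper's exact procedure.
import cv2
import numpy as np

LAMBDA = 0.5  # assumed penalty per degree of freedom

def reproj_error(pts_from, pts_to, warp):
    """Mean distance between warped pts_from and pts_to (warp is 2x3 or 3x3)."""
    ones = np.ones((len(pts_from), 1))
    proj = (warp @ np.hstack([pts_from, ones]).T).T
    if warp.shape[0] == 3:                       # projective: dehomogenize
        proj = proj[:, :2] / proj[:, 2:3]
    return np.mean(np.linalg.norm(proj[:, :2] - pts_to, axis=1))

def pick_model(pts_from, pts_to):
    """pts_from, pts_to: (K, 2) float32 arrays of matched points."""
    candidates = {}
    t = np.mean(pts_to - pts_from, axis=0)                        # translation, 2 DOF
    candidates["translation"] = (np.hstack([np.eye(2), t[:, None]]), 2)
    sim, _ = cv2.estimateAffinePartial2D(pts_from, pts_to,        # similarity, 4 DOF
                                         method=cv2.RANSAC)
    candidates["similarity"] = (sim, 4)
    aff, _ = cv2.estimateAffine2D(pts_from, pts_to,               # affine, 6 DOF
                                  method=cv2.RANSAC)
    candidates["affine"] = (aff, 6)
    H, _ = cv2.findHomography(pts_from, pts_to, cv2.RANSAC, 3.0)  # projective, 8 DOF
    candidates["projective"] = (H, 8)
    scores = {name: reproj_error(pts_from, pts_to, W) + LAMBDA * dof
              for name, (W, dof) in candidates.items() if W is not None}
    best = min(scores, key=scores.get)
    return best, candidates[best][0]
```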

Camera Stabilization Based on 2.5D Motion Estimation and Inertial Motion Filtering (Zhu, Xu, Yang, Jin)
Typically, camera movements fall into a few classes of motion (e.g. panning, dolly, tracking, …).
We can pass movement along the dominant dimension through and stabilize the non-dominant dimensions.
–Since our motions are typically constrained to the horizontal plane, we can compensate for vertical bouncing and rotation but leave horizontal motion unchecked.

Our approach
AIBO vibration is very regular.
Rotation oscillates around 0°.
Vertical bouncing oscillates around a fixed value.
Horizontal bouncing oscillates around a value determined by AIBO’s turning and sidestepping velocities (which we know!).
So…

It’s a control problem now!
Over many frames, image motion should tend toward fixed (or predictable) values.
Use an “image placement controller” that allows high-frequency changes in placement but enforces this constraint.
Specifically, we extract and adjust the x, y translation and θ rotation used for image registration.

The obligatory flowchart
[Flowchart, time running from past (left) to future (right):]
Image N-1, Image N
Compute H' for N → N-1
Precomputed H'' for N-1 → 1
H = H'' · H' (for N → 1)
Correct H for drift
Transform and show Image N
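
Rendered as code, the flowchart might look like the loop below. `homography_to_previous` and `correct_for_drift` are placeholders for the steps described on the following slides, and carrying the corrected H forward as the next frame's H'' is an assumption about the bookkeeping rather than something stated on the slide.

```python
# Sketch of the per-frame pipeline in the flowchart. The two helpers are
# placeholders for later slides; carrying the drift-corrected H forward
# as the next H'' is an assumed detail.
import cv2
import numpy as np

def stabilize(frames, homography_to_previous, correct_for_drift):
    h, w = frames[0].shape[:2]
    H_acc = np.eye(3)                                  # H'': accumulated, N-1 -> 1
    out = [frames[0]]
    for prev, curr in zip(frames, frames[1:]):
        H_prime = homography_to_previous(curr, prev)   # H' : N -> N-1
        H = H_acc @ H_prime                            # H  : N -> 1
        H = correct_for_drift(H)                       # nudge rotation/translation back
        out.append(cv2.warpPerspective(curr, H, (w, h)))  # transform and show Image N
        H_acc = H
    return out
```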

How to find corresponding points
Q: How do you find corresponding points in Image N and Image N-1?
A: Andrew and Ranjith’s RANSAC from Assignment 5.
Q: Oh.
A: It’s pretty robust, even to blurry, smeared images.
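
A roughly equivalent stand-in for the assignment's RANSAC, built from off-the-shelf OpenCV pieces (ORB features, brute-force matching, RANSAC inside cv2.estimateAffinePartial2D), is sketched below. None of this is the original implementation; it only shows the shape of the correspondence-finding step.

```python
# Sketch: find corresponding points between Image N and Image N-1 and fit
# a similarity transform with RANSAC. OpenCV stands in for the course
# assignment's RANSAC code used in the actual project.
import cv2
import numpy as np

def match_to_previous(img_curr, img_prev):
    orb = cv2.ORB_create(nfeatures=1000)
    kp_c, des_c = orb.detectAndCompute(img_curr, None)
    kp_p, des_p = orb.detectAndCompute(img_prev, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des_c, des_p)
    pts_c = np.float32([kp_c[m.queryIdx].pt for m in matches])
    pts_p = np.float32([kp_p[m.trainIdx].pt for m in matches])
    # RANSAC rejects the outliers that blur and smear tend to produce.
    A, inliers = cv2.estimateAffinePartial2D(pts_c, pts_p, method=cv2.RANSAC,
                                             ransacReprojThreshold=3.0)
    keep = inliers.ravel() == 1
    return pts_c[keep], pts_p[keep], A
```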

Why find H indirectly?
We could simply find the H from Image N to the corrected, transformed Image N-1, right?
Wrong-o! The corrected image is jagged and noisy; RANSAC would freak.
Instead, first find H' from Image N to the normal Image N-1. Then premultiply it with the accumulated H (H'') from the normal Image N-1 to Image 1.

How do we get x, y, and θ?
It’s easy if our “homography” is just a rigid transform. It’s easy to adjust them, too.
A cop out? Perhaps… it doesn’t fix all the aberrations in the AIBO image. It’s a start, though.
[Formula: θ is recovered from the rotation entries of the rigid transform via an arccotangent; x and y are its translation terms.]
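
With the registration constrained to a rigid transform, the parameters fall straight out of the matrix. The slide's formula used an arccotangent; the sketch below uses the equivalent atan2 form, which is an assumption about the exact convention rather than a copy of the original code.

```python
# Sketch: pull x, y and theta out of a rigid transform of the form
#   [[cos t, -sin t, x],
#    [sin t,  cos t, y],
#    [0,      0,     1]]
# atan2 is used here in place of the slide's arccotangent formula.
import numpy as np

def decompose_rigid(H):
    theta = np.arctan2(H[1, 0], H[0, 0])
    return H[0, 2], H[1, 2], theta
```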

Finding rigid transformations (1 of 2)
Step 1: Constrained least squares.
[Equation: each correspondence, with P = (x, y) a point in Image N and (u, v) the matching point in Image N+1, contributes two rows to the U matrix in the unknowns a, b, c, d, e; minimize ‖U · [a b c d e]ᵀ‖ subject to ‖[a b c d e]‖ = 1.]

Finding rigid transformations (2 of 2)
Step 2: Throw away image scaling.
Divide a, b, c, d, and e by e, then by the length of [a b]. Otherwise the image will shrink as you walk forward: [a/e b/e] is not constrained by the constrained least squares to be of length 1.
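
A sketch covering both steps: a homogeneous (constrained) least-squares fit of a, b, c, d, e via the smallest singular vector of U, followed by dividing everything by e and then by the length of [a b] to throw away scaling. The similarity model and the exact signs are reconstructed assumptions from the slide text, not the project's code.

```python
# Sketch of both steps, assuming the similarity model
#   u*e = a*x + b*y + c,   v*e = -b*x + a*y + d,
# where P = (x, y) is a point in Image N and (u, v) its match in Image N+1.
# The layout and signs are reconstructed assumptions.
import numpy as np

def fit_rigid(pts_n, pts_n1):
    """pts_n, pts_n1: (K, 2) arrays of corresponding points."""
    rows = []
    for (x, y), (u, v) in zip(pts_n, pts_n1):
        rows.append([x,  y, 1, 0, -u])   # a*x + b*y + c     - u*e = 0
        rows.append([y, -x, 0, 1, -v])   # a*y - b*x     + d - v*e = 0
    U = np.asarray(rows, dtype=float)

    # Step 1: constrained least squares -- minimize ||U p|| with ||p|| = 1,
    # i.e. the right singular vector of U with the smallest singular value.
    a, b, c, d, e = np.linalg.svd(U)[2][-1]

    # Step 2: throw away image scaling. Divide by e (the homogeneous scale),
    # then by the length of [a b], which the fit did not constrain to 1.
    a, b, c, d = a / e, b / e, c / e, d / e
    s = np.hypot(a, b)
    a, b, c, d = a / s, b / s, c / s, d / s

    return np.array([[a,  b, c],
                     [-b, a, d],
                     [0., 0., 1.]])
```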

Fighting drift, step by step
Isolate θ from H.
Apply a really simple correction: premultiply H by a rotation matrix that rotates by −θ/constant (forcing it back toward 0).
Isolate t_x and t_y from H.
Apply a similar correction. We should force t_y toward a predicted value based on turning and sidestepping; we don’t right now, forcing it to 0 instead.
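
A sketch of that correction: read θ and the translations out of H, then premultiply by small counter-transforms that nudge them back toward their targets. The gain constant K and the atan2 convention are illustrative assumptions; the project's actual constants are not given on the slide.

```python
# Sketch of the drift correction: premultiply H with a small counter-rotation
# (-theta / K) and a small counter-translation pulling tx, ty toward their
# targets. K and the atan2 convention are assumed, not the project's values.
import numpy as np

K = 20.0  # larger K = gentler, slower correction (assumed)

def correct_for_drift(H, tx_target=0.0, ty_target=0.0):
    theta = np.arctan2(H[1, 0], H[0, 0])
    tx, ty = H[0, 2], H[1, 2]

    a = -theta / K                                   # force theta back toward 0
    undo_rot = np.array([[np.cos(a), -np.sin(a), 0.0],
                         [np.sin(a),  np.cos(a), 0.0],
                         [0.0,        0.0,       1.0]])

    # For now ty_target is 0; it could instead be predicted from the AIBO's
    # turning and sidestepping velocities, as the slide suggests.
    undo_t = np.array([[1.0, 0.0, -(tx - tx_target) / K],
                       [0.0, 1.0, -(ty - ty_target) / K],
                       [0.0, 0.0, 1.0]])

    return undo_t @ undo_rot @ H
```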

Demo time!
Hallway scene
–Normal, stabilized, and side-by-side
Lab scene
–Normal, stabilized, and side-by-side

Any questions?