Recognising Panoramas

Recognising Panoramas M. Brown and D. Lowe, University of British Columbia

Introduction Are you getting the whole picture? Compact Camera FOV = 50 x 35°

Introduction Are you getting the whole picture? Compact Camera FOV = 50 x 35° Human FOV = 200 x 135°

Introduction Are you getting the whole picture? Compact Camera FOV = 50 x 35° Human FOV = 200 x 135° Panoramic Mosaic = 360 x 180°

Why “Recognising Panoramas”?

Why “Recognising Panoramas”? 1D Rotations (θ) Ordering ⇒ matching images

Why “Recognising Panoramas”? 1D Rotations (θ) Ordering ⇒ matching images 2D Rotations (θ, φ) Ordering ⇒ matching images

Why “Recognising Panoramas”?

Overview Feature Matching Image Matching Bundle Adjustment Multi-band Blending Results Conclusions

Outline: Input Images → Feature Extraction → Image Matching → Bundle Adjustment and Multi-band Blending

Overview Feature Matching (SIFT Features, Nearest Neighbour Matching) Image Matching Bundle Adjustment Multi-band Blending Results Conclusions

Invariant Features Schmid & Mohr 1997, Lowe 1999, Baumberg 2000, Tuytelaars & Van Gool 2000, Mikolajczyk & Schmid 2001, Brown & Lowe 2002, Matas et al. 2002, Schaffalitzky & Zisserman 2002

SIFT Features – Location Extrema of DoG Discard Low Contrast Discard Edge Points Output: x,y,σ Picture Credit: http://en.wikipedia.org/wiki/Scale-invariant_feature_transform

SIFT Features – Orientation
Difference of Gaussian function:
G(x, y, kσ) = Gaussian kernel with standard deviation kσ
L(x, y, kσ) = I(x, y) * G(x, y, kσ)
D(x, y, σ) = L(x, y, k_i σ) – L(x, y, k_j σ)
Gradient magnitude and orientation:
m(x, y) = √[ (L(x + 1, y) – L(x - 1, y))² + (L(x, y + 1) – L(x, y - 1))² ]
Θ(x, y) = tan⁻¹[ (L(x, y + 1) – L(x, y - 1)) / (L(x + 1, y) – L(x - 1, y)) ]
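
A minimal numpy sketch of the gradient magnitude and orientation formulas above (illustrative only, not the authors' implementation; SciPy's Gaussian filter stands in for the scale-space smoothing):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def gradient_mag_ori(image, sigma=1.6):
    """Return per-pixel gradient magnitude and orientation of L = I * G(sigma)."""
    L = gaussian_filter(image.astype(np.float64), sigma)
    dx = np.zeros_like(L)
    dy = np.zeros_like(L)
    dx[:, 1:-1] = L[:, 2:] - L[:, :-2]   # L(x+1, y) - L(x-1, y)
    dy[1:-1, :] = L[2:, :] - L[:-2, :]   # L(x, y+1) - L(x, y-1)
    m = np.sqrt(dx**2 + dy**2)           # gradient magnitude m(x, y)
    theta = np.arctan2(dy, dx)           # orientation Theta(x, y)
    return m, theta
```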

SIFT Features – Orientation Local Gradients Adjusted Gradients Gaussian Weighting Orientation Histogram – 36 bins with 10 degrees each

SIFT Features – Orientation Local Gradients Adjusted Gradients Gaussian Weighting Orientation Histogram – 36 bins with 10 degrees each θ

SIFT Features – Descriptor Vector A 4 x 4 grid of subregions around the keypoint, oriented according to θ. Take 8 versions of the gradient image, one per orientation range; each version keeps only the gradients whose direction falls in that range. Concatenate the 8 versions over the 4 x 4 grid to get a 128-dimensional descriptor vector (4 x 4 x 8 = 128).

SIFT Features Establish invariant frame: maxima/minima of scale-space DoG → x, y, s; maximum of the distribution of local gradients → θ. Form descriptor vector: histogram of smoothed local gradients, 128 dimensions. SIFT features are geometrically invariant to similarity transforms, with some robustness to affine change, and photometrically invariant to affine changes in intensity.
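
For reference, a usage sketch of SIFT keypoint and descriptor extraction with OpenCV (OpenCV ≥ 4.4; the filename is a placeholder, and this is illustrative rather than the pipeline used in the paper):

```python
import cv2

img = cv2.imread("image_01.jpg", cv2.IMREAD_GRAYSCALE)  # placeholder filename
sift = cv2.SIFT_create()
keypoints, descriptors = sift.detectAndCompute(img, None)
# Each keypoint carries position (pt), scale (size) and orientation (angle);
# descriptors is an N x 128 float array, one row per keypoint.
print(len(keypoints), descriptors.shape)
```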

Nearest Neighbour Matching Find the k nearest neighbours of each feature, where k = number of overlapping images (we use k = 4). Use a k-d tree: it recursively bi-partitions the data at the mean, in the dimension of maximum variance. Approximate nearest neighbours are found in O(n log n).
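
A sketch of the k-d tree matching step above (assumed setup, not the paper's code; the descriptor array is a placeholder for the SIFT descriptors pooled from all input images, and SciPy's tree uses a different splitting rule than the mean/max-variance one described above, though the query pattern is the same):

```python
import numpy as np
from scipy.spatial import cKDTree

descriptors = np.random.rand(1000, 128).astype(np.float32)  # placeholder SIFT descriptors
tree = cKDTree(descriptors)
dists, idx = tree.query(descriptors, k=5)  # k + 1 because each point finds itself first
neighbours = idx[:, 1:]                    # keep the 4 nearest non-self neighbours
```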

Overview Feature Matching Image Matching (RANSAC for Homography, Probabilistic model for verification) Bundle Adjustment Multi-band Blending Results Conclusions

Homography Defn – Homography: an invertible projective mapping between two image planes. In computer vision it is typically used to describe the correspondence between two images of the same scene taken from different camera viewpoints. For images taken by a rotating camera, as here, the homography has four parameters, θ1, θ2, θ3, f, corresponding to the three angles of camera rotation and the focal length. Picture Credit: http://weboflife.nasa.gov/learningResources/vestibularbrief.htm

RANSAC (RANdom SAmple Consensus)
bestH = none; bestInliers = 0
while i < max iterations:
- select a random sample of the matched points (4 pairs for a homography)
- currentH = homography solved from the sample
- for each matched pair, test whether it is consistent with currentH (an inlier)
- if currentH has more inliers than bestH, set bestH = currentH
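
A compact sketch of the loop above (illustrative assumptions: matched points arrive as two N x 2 float32 arrays, and OpenCV handles the 4-point homography solve and the reprojection test):

```python
import numpy as np
import cv2

def ransac_homography(pts_a, pts_b, n_iters=500, thresh=3.0):
    """pts_a, pts_b: N x 2 float32 arrays of putatively matched points."""
    best_H, best_inliers = None, 0
    n = len(pts_a)
    for _ in range(n_iters):
        sample = np.random.choice(n, 4, replace=False)     # minimal sample: 4 pairs
        H = cv2.getPerspectiveTransform(pts_a[sample], pts_b[sample])
        proj = cv2.perspectiveTransform(pts_a.reshape(-1, 1, 2), H).reshape(-1, 2)
        inliers = int(np.sum(np.linalg.norm(proj - pts_b, axis=1) < thresh))
        if inliers > best_inliers:                         # keep the best hypothesis
            best_H, best_inliers = H, inliers
    return best_H, best_inliers
```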

RANSAC for Homography

Probabilistic model for verification

Probabilistic model for verification We want the probability that the image match is correct, given the set of RANSAC inliers and outliers: p(match | inliers). Using Bayes’ rule: p(match | inliers) = p(inliers | match) * p(match) / p(inliers). This can be simplified if we make some assumptions: p(inliers | match) = B(n_i; n_f, p1) and p(inliers | ~match) = B(n_i; n_f, p0), where B is a binomial distribution function.

Probabilistic model for verification Compare the probability that this set of RANSAC inliers/outliers was generated by a correct image match against the probability it was generated by a false match. n_i = #inliers, n_f = #features; p1 = p(inlier | match), p0 = p(inlier | ~match); p_min = acceptance probability. A match is accepted when p(match | inliers) exceeds p_min, which requires choosing values for p1, p0 and p_min.
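
A sketch of this verification test (the numeric values for p1, p0, the prior and p_min are illustrative assumptions, not quoted from the slide):

```python
from scipy.stats import binom

def match_is_correct(n_inliers, n_features,
                     p1=0.6, p0=0.1, prior_match=1e-6, p_min=0.999):
    """Bayesian verification: accept if p(match | inliers) exceeds p_min."""
    like_match = binom.pmf(n_inliers, n_features, p1) * prior_match
    like_nomatch = binom.pmf(n_inliers, n_features, p0) * (1.0 - prior_match)
    posterior = like_match / (like_match + like_nomatch)
    return posterior > p_min

print(match_is_correct(n_inliers=30, n_features=100))
```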

Finding the panoramas

Bundle Adjustment Add the images to the mosaic one at a time. For each new image, solve for the camera parameters θ1, θ2, θ3, f that align it with the images already placed, using Levenberg-Marquardt to minimise an error function.
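
An illustrative sketch of this step using SciPy's Levenberg-Marquardt solver (project and matches are hypothetical stand-ins for the real projection model and feature-match bookkeeping; this is not the paper's implementation):

```python
import numpy as np
from scipy.optimize import least_squares

def residuals(params, matches, project):
    """params = [theta1, theta2, theta3, f]; one (dx, dy) residual per feature match."""
    return np.concatenate([project(params, src) - dst for src, dst in matches])

# method="lm" selects Levenberg-Marquardt; x0 holds the initial rotation (radians)
# and focal length (pixels) taken from the best matching image already placed.
# result = least_squares(residuals, x0=np.array([0.0, 0.0, 0.0, 700.0]),
#                        args=(matches, project), method="lm")
```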

Overview Feature Matching Image Matching Bundle Adjustment (Error function) Multi-band Blending Results Conclusions

Error function Sum of squared projection errors, passed through a robust error function. n = #images; I(i) = set of image matches to image i; F(i, j) = set of feature matches between images i and j; r_ij^k = residual of the k-th feature match between images i and j.
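
Written out from the variable definitions above, the error being minimised has the form (h is the robust error function):

```latex
e \;=\; \sum_{i=1}^{n} \;\sum_{j \in I(i)} \;\sum_{k \in F(i,j)} h\!\left(\mathbf{r}_{ij}^{k}\right)
```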

Homography for Rotation Parameterise each camera by a rotation and a focal length; this gives pairwise homographies between overlapping images, as written out below.
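
In this rotation-only model the homography between images i and j factors through the camera calibrations and rotations (a reconstruction consistent with the paper's formulation; K_i holds the focal length f_i and R_i is the rotation of camera i):

```latex
\tilde{\mathbf{u}}_i \;=\; H_{ij}\, \tilde{\mathbf{u}}_j, \qquad H_{ij} \;=\; K_i R_i R_j^{\mathsf{T}} K_j^{-1}
```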

Bundle Adjustment New images initialised with rotation, focal length of best matching image

Multi-band Blending Burt & Adelson 1983. Blend each frequency band over a spatial range proportional to its wavelength (range ∝ λ).

2-band Blending Low frequency (λ > 2 pixels) High frequency (λ < 2 pixels)
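
A rough 2-band blending sketch under stated assumptions (images already warped into the mosaic frame with per-image weight maps; not the paper's implementation). Low frequencies are blended with the smooth normalised weights; high frequencies are taken from the image with the maximum weight at each pixel:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def two_band_blend(images, weights, sigma=2.0):
    """images, weights: lists of equally sized 2-D float arrays."""
    low = [gaussian_filter(im, sigma) for im in images]         # lambda > ~2 pixels
    high = [im - lo for im, lo in zip(images, low)]             # lambda < ~2 pixels
    w = np.stack(weights)
    w_norm = w / np.maximum(w.sum(axis=0), 1e-8)                # smooth blend weights
    w_max = (w == w.max(axis=0, keepdims=True)).astype(float)   # winner-take-all mask
    low_blend = sum(lo * wn for lo, wn in zip(low, w_norm))
    high_blend = sum(hi * wm for hi, wm in zip(high, w_max))
    return low_blend + high_blend
```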

Linear Blending

2-band Blending

Results

Conclusions Fully automatic panoramas: a recognition problem, solved with an invariant-feature-based method (SIFT features, RANSAC, Bundle Adjustment, Multi-band Blending) in O(n log n). Future work: advanced camera modelling (radial distortion, camera motion, scene motion, vignetting, exposure, high dynamic range, flash …); the full 3D case: recognising 3D objects/scenes in unordered datasets. http://www.cs.ubc.ca/~mbrown/panorama/panorama.html

Questions? http://www.cs.ubc.ca/~mbrown/panorama/panorama.html

Analytical computation of derivatives

Levenberg-Marquardt Iteration step of the form shown below.
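
A Levenberg-Marquardt update consistent with this slide, reconstructed rather than quoted (Θ are the camera parameters, r the residual vector, J = ∂r/∂Θ, λ the damping term, and C_p a prior covariance on the parameters):

```latex
\Phi \;=\; \left(J^{\mathsf{T}} J + \lambda\, C_p^{-1}\right)^{-1} J^{\mathsf{T}} \mathbf{r}, \qquad \Theta \;\leftarrow\; \Theta + \Phi
```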