Mosaics. Today’s Readings: Szeliski and Shum paper (sections 1 and 2, skim the rest) – http://www.cs.washington.edu/education/courses/455/08wi/readings/szeliskiShum97.pdf

Presentation transcript:

Mosaics. Today’s Readings: Szeliski and Shum paper (sections 1 and 2, skim the rest). Examples: VR Seattle, full screen panoramas (cubic), Mars panoramas.

Image Mosaics. Goal: stitch together several images into a seamless composite.

How to do it? Basic procedure:
1. Take a sequence of images from the same position (rotate the camera about its optical center)
2. Compute the transformation between the second image and the first
3. Shift the second image to overlap with the first
4. Blend the two together to create a mosaic
5. If there are more images, repeat

Project 2
1. Take pictures on a tripod (or handheld)
2. Warp to spherical coordinates
3. Extract features
4. Align neighboring pairs using RANSAC
5. Write out list of neighboring translations
6. Correct for drift
7. Read in warped images and blend them
8. Crop the result and import into a viewer
Roughly based on Autostitch by Matthew Brown and David Lowe

Aligning images How to account for warping? Translations are not enough to align the images Photoshop demo

Image reprojection. The mosaic has a natural interpretation in 3D: the images are reprojected onto a common plane (PP), and the mosaic is formed on this plane.

Image reprojection. Basic question: how do we relate two images taken from the same camera center, i.e., how do we map a pixel from PP1 to PP2? Answer: cast a ray through each pixel in PP1 and draw the pixel where that ray intersects PP2. We don’t need to know what’s in the scene!

Image reprojection Observation Rather than thinking of this as a 3D reprojection, think of it as a 2D image warp from one image to another

Homographies. A homography is the perspective projection of a plane. It goes by many names: homography, texture map, collineation, planar projective map. It is modeled as a 2D warp using homogeneous coordinates: p’ = Hp. To apply a homography H, compute p’ = Hp (a regular matrix multiply), then convert p’ from homogeneous to image coordinates by dividing by w (the third coordinate).
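This step is easy to express directly. Below is a minimal sketch (assuming NumPy; the function name and array shapes are illustrative, not from the lecture) of applying a 3x3 homography to a set of 2D points:

```python
import numpy as np

def apply_homography(H, pts):
    """Map an (N, 2) array of pixel coordinates through a 3x3 homography H."""
    n = pts.shape[0]
    homog = np.hstack([pts, np.ones((n, 1))])   # (x, y) -> (x, y, 1)
    warped = (H @ homog.T).T                    # p' = H p (regular matrix multiply)
    return warped[:, :2] / warped[:, 2:3]       # divide by w, the third coordinate
```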

Image warping with homographies. (Figure: the same scene warped to an image plane in front and an image plane below; black areas are where no pixel maps to.)

Panoramas. What if you want a 360° field of view? Project the mosaic onto a sphere instead of a plane.

Spherical projection. Map a 3D point (X, Y, Z) onto the unit sphere by dividing by its length: (x̂, ŷ, ẑ) = (X, Y, Z) / sqrt(X² + Y² + Z²). Convert to spherical coordinates (θ, φ), then to spherical image coordinates (x̃, ỹ) = (s·θ, s·φ), where s defines the size of the final image (it is often convenient to set s = camera focal length).
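A minimal sketch of this warp (assuming NumPy, square pixels with focal length f in pixels, and the principal point at the image center; names are illustrative). For each pixel of the spherical image it computes where to sample the original planar image, i.e., the inverse mapping used for warping:

```python
import numpy as np

def spherical_warp_coords(h, w, f):
    """Planar-image sample locations (xs, ys) for an h x w spherical image, with s = f."""
    yc, xc = (h - 1) / 2.0, (w - 1) / 2.0
    ys, xs = np.indices((h, w), dtype=np.float64)
    theta = (xs - xc) / f                      # longitude angle
    phi = (ys - yc) / f                        # latitude angle
    # Point on the unit sphere for each (theta, phi)
    x = np.sin(theta) * np.cos(phi)
    y = np.sin(phi)
    z = np.cos(theta) * np.cos(phi)
    # Perspective-project back onto the planar image
    return f * x / z + xc, f * y / z + yc
```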

Spherical reprojection. How do we map the sphere onto a flat image?

Spherical reprojection. How do we map the sphere onto a flat image? Use the image projection matrix, or use the version of projection that properly accounts for radial distortion, as discussed in the projection slides. This is what you’ll do for Project 2.

Correcting radial distortion (examples from Helmut Dersch).

Modeling distortion. Project to “normalized” image coordinates: x_n = x/z, y_n = y/z. Apply radial distortion: r² = x_n² + y_n², x_d = x_n·(1 + κ₁r² + κ₂r⁴), y_d = y_n·(1 + κ₁r² + κ₂r⁴). Apply the focal length and translate to the image center: x’ = f·x_d + x_c, y’ = f·y_d + y_c. To model lens distortion, use this projection operation instead of the standard projection matrix multiplication.
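A minimal sketch of that projection (assuming NumPy; kappa1 and kappa2 are illustrative distortion coefficients, not values given in the lecture):

```python
import numpy as np

def project_with_distortion(P, f, cx, cy, kappa1, kappa2):
    """Project a 3D point P = (x, y, z) to distorted pixel coordinates."""
    x_n, y_n = P[0] / P[2], P[1] / P[2]           # "normalized" image coordinates
    r2 = x_n ** 2 + y_n ** 2                      # squared radius
    scale = 1 + kappa1 * r2 + kappa2 * r2 ** 2    # radial distortion factor
    x_d, y_d = x_n * scale, y_n * scale
    return f * x_d + cx, f * y_d + cy             # focal length + image center
```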

Spherical reprojection. Map the image to spherical coordinates; this requires knowing the focal length. (Figure: an input image warped with f = 200, f = 400, and f = 800 pixels.)

Aligning spherical images. Suppose we rotate the camera by an angle θ about the vertical axis. How does this change the spherical image?

Aligning spherical images. Suppose we rotate the camera by an angle θ about the vertical axis. How does this change the spherical image? It simply translates the spherical image horizontally by θ. This means that we can align spherical images by translation.

Spherical image stitching What if you don’t know the camera rotation? Solve for the camera rotations –Note that a pan (rotation) of the camera is a translation of the sphere! –Use feature matching to solve for translations of spherical-warped images

Feature detection summary. Here’s what you do: compute the gradient at each point in the image; create the H matrix from the entries in the gradient; compute the eigenvalues; find points with a large response (λ⁻ > threshold); choose those points where λ⁻ is a local maximum as features.
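A minimal sketch of this detector (assuming NumPy and SciPy; the smoothing sigma and threshold are illustrative, not the course’s starter-code values):

```python
import numpy as np
from scipy import ndimage

def detect_features(img, sigma=1.0, threshold=1e-2):
    """Return (row, col) locations where lambda_minus is a thresholded local maximum."""
    Ix = ndimage.sobel(img, axis=1)               # image gradients
    Iy = ndimage.sobel(img, axis=0)
    # Entries of H, Gaussian-weighted over a window around each pixel
    Ixx = ndimage.gaussian_filter(Ix * Ix, sigma)
    Iyy = ndimage.gaussian_filter(Iy * Iy, sigma)
    Ixy = ndimage.gaussian_filter(Ix * Iy, sigma)
    # Smaller eigenvalue of H = [[Ixx, Ixy], [Ixy, Iyy]]
    lam_minus = 0.5 * ((Ixx + Iyy) - np.sqrt((Ixx - Iyy) ** 2 + 4 * Ixy ** 2))
    # Keep points above threshold that are local maxima in a 3x3 neighborhood
    local_max = lam_minus == ndimage.maximum_filter(lam_minus, size=3)
    return np.argwhere(local_max & (lam_minus > threshold))
```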

The Harris operator. λ⁻ is a variant of the “Harris operator” for feature detection, f = λ₁λ₂ / (λ₁ + λ₂) = determinant(H) / trace(H). The trace is the sum of the diagonals, i.e., trace(H) = h11 + h22. This is very similar to λ⁻ but less expensive (no square root). It is called the “Harris corner detector” or “Harris operator”. There are lots of other detectors; this is one of the most popular.

The Harris operator. (Figure: the Harris operator response computed over an example image.)

Multiscale Oriented PatcheS (MOPS) descriptor: take a 40x40 square window around the detected feature, scale it to 1/5 size (using prefiltering), rotate it to horizontal, sample an 8x8 square window centered at the feature, and intensity-normalize the window by subtracting the mean and dividing by the standard deviation within the window. Adapted from a slide by Matthew Brown.
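A rough sketch of such a descriptor (assuming NumPy and SciPy; the feature’s dominant orientation angle is taken as given, and the exact order of blurring, rotating, and sampling here is illustrative rather than the exact MOPS pipeline):

```python
import numpy as np
from scipy import ndimage

def mops_like_descriptor(img, y, x, angle_deg):
    """Extract an intensity-normalized 8x8 patch from a 40x40 window at (y, x)."""
    win = img[y - 20:y + 20, x - 20:x + 20].astype(np.float64)   # 40x40 window
    win = ndimage.rotate(win, -angle_deg, reshape=False)         # rotate to horizontal
    win = ndimage.gaussian_filter(win, sigma=2.0)                # prefilter before downsampling
    patch = ndimage.zoom(win, 0.2)[:8, :8]                       # scale to 1/5 -> 8x8 samples
    return (patch - patch.mean()) / (patch.std() + 1e-8)         # subtract mean, divide by std
```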

Feature matching. Given a feature in I1, how do we find the best match in I2? 1. Define a distance function that compares two descriptors. 2. Test all the features in I2 and find the one with the minimum distance.

Feature distance. How do we define the difference between two features f1 and f2? A simple approach is SSD(f1, f2), the sum of squared differences between the entries of the two descriptors. However, this can give good scores to very ambiguous (bad) matches.

Feature distance. A better approach is the ratio distance = SSD(f1, f2) / SSD(f1, f2’), where f2 is the best SSD match to f1 in I2 and f2’ is the second-best SSD match to f1 in I2. The ratio is close to 1 for ambiguous matches and small for distinctive ones, so thresholding it discards ambiguous matches.
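A minimal sketch of brute-force matching with this ratio test (assuming NumPy; the descriptor array layout and the threshold value are illustrative):

```python
import numpy as np

def match_features(desc1, desc2, ratio_threshold=0.6):
    """desc1, desc2: (N, D) descriptor arrays. Returns (i, j) index pairs that pass."""
    matches = []
    for i, f1 in enumerate(desc1):
        ssd = np.sum((desc2 - f1) ** 2, axis=1)   # SSD to every feature in I2
        best, second = np.argsort(ssd)[:2]        # best and second-best matches
        if ssd[best] / (ssd[second] + 1e-12) < ratio_threshold:
            matches.append((i, int(best)))
    return matches
```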

Evaluating the results. How can we measure the performance of a feature matcher?

Computing image translations. What do we do about the “bad” matches?

RAndom SAmple Consensus (RANSAC). Select one match and count the inliers (in this case, only one inlier).

RAndom SAmple Consensus (RANSAC). Select one match and count the inliers (here, 4 inliers).

Least squares fit. Find the “average” translation vector for the largest set of inliers.

RANSAC. The same basic approach works for any transformation: translation, rotation, homographies, etc. It is a very useful tool. General version (see the sketch below):
1. Randomly choose a set of K correspondences (typically K is the minimum size that lets you fit a model)
2. Fit a model (e.g., a homography) to those correspondences
3. Count the number of inliers that “approximately” fit the model (this requires a threshold on the error)
4. Repeat as many times as you can
5. Choose the model that has the largest set of inliers
6. Refine the model by doing a least squares fit using ALL of the inliers
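A minimal sketch of this loop, specialized to the pure-translation model used for aligning the spherically warped images (assuming NumPy; the iteration count and inlier threshold are illustrative):

```python
import numpy as np

def ransac_translation(pts1, pts2, num_iters=500, inlier_thresh=2.0):
    """pts1, pts2: (N, 2) matched feature locations. Returns the best (dx, dy)."""
    best_inliers = np.zeros(len(pts1), dtype=bool)
    rng = np.random.default_rng(0)
    for _ in range(num_iters):
        i = rng.integers(len(pts1))                     # K = 1 correspondence for a translation
        t = pts2[i] - pts1[i]                           # candidate translation
        err = np.linalg.norm(pts1 + t - pts2, axis=1)
        inliers = err < inlier_thresh                   # "approximately" fit, given a threshold
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    # Refine with a least squares fit over ALL inliers (for a translation, the mean displacement)
    return (pts2[best_inliers] - pts1[best_inliers]).mean(axis=0)
```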

Assembling the panorama Stitch pairs together, blend, then crop

Problem: Drift. Error accumulation: small errors accumulate over time.

Problem: Drift. Solution: add another copy of the first image at the end. This gives a constraint: y_n = y_1. There are a bunch of ways to solve this problem (the first option is sketched below):
– add a displacement of (y_1 – y_n)/(n – 1) to each image after the first
– compute a global warp: y’ = y + ax
– run a big optimization problem incorporating this constraint; this is the best solution but more complicated, and is known as “bundle adjustment”
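A minimal sketch of that first fix (assuming NumPy; the offsets array layout is illustrative): spread the vertical mismatch between the first image and its re-detected copy evenly across the sequence so that y_n ends up equal to y_1:

```python
import numpy as np

def correct_drift(offsets):
    """offsets: (n, 2) array of (x, y) image positions; offsets[-1] is the
    re-aligned copy of the first image. Returns drift-corrected offsets."""
    offsets = np.asarray(offsets, dtype=np.float64).copy()
    n = len(offsets)
    drift_y = offsets[0, 1] - offsets[-1, 1]          # y_1 - y_n
    for i in range(1, n):
        offsets[i, 1] += drift_y * i / (n - 1)        # accumulate (y_1 - y_n)/(n - 1) per image
    return offsets
```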

Full-view Panorama

Different projections are possible

Image Blending

Feathering. (Figure: two overlapping images combined with blend weights that ramp from 1 to 0 across the overlap.)

Effect of window size. (Figure: left and right blend weights, between 0 and 1, for one choice of blending window.)

Effect of window size

Good window size. The “optimal” window is smooth but not ghosted. This doesn’t always work, though...
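A minimal sketch of feathered blending (assuming NumPy and SciPy; using distance-to-border weights is one common way to get the ramp, not necessarily the one used in the course code):

```python
import numpy as np
from scipy import ndimage

def feather_blend(img1, img2, mask1, mask2):
    """imgN: HxWx3 float images on a common canvas; maskN: HxW booleans of valid pixels."""
    # Weight each pixel by its distance to the nearest invalid pixel, so weights ramp down at seams
    w1 = ndimage.distance_transform_edt(mask1)
    w2 = ndimage.distance_transform_edt(mask2)
    total = w1 + w2
    total[total == 0] = 1.0                                   # avoid divide-by-zero outside both images
    return (img1 * w1[..., None] + img2 * w2[..., None]) / total[..., None]
```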

Pyramid blending. Create a Laplacian pyramid and blend each level. Burt, P. J. and Adelson, E. H., “A multiresolution spline with applications to image mosaics,” ACM Transactions on Graphics, 2(4), October 1983.
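A rough sketch of the idea (assuming NumPy, SciPy, and grayscale images; the pyramid depth and the use of ndimage.zoom for up/downsampling are illustrative choices):

```python
import numpy as np
from scipy import ndimage

def gaussian_pyramid(img, levels):
    pyr = [img.astype(np.float64)]
    for _ in range(levels - 1):
        pyr.append(ndimage.zoom(ndimage.gaussian_filter(pyr[-1], 1.0), 0.5))
    return pyr

def laplacian_pyramid(img, levels):
    gp = gaussian_pyramid(img, levels)
    lp = [gp[i] - ndimage.zoom(gp[i + 1], np.array(gp[i].shape) / np.array(gp[i + 1].shape))
          for i in range(levels - 1)]
    return lp + [gp[-1]]                                     # coarsest level is a Gaussian level

def pyramid_blend(a, b, mask, levels=4):
    """Blend grayscale images a and b; mask is 1 where a should show, 0 where b should."""
    la, lb = laplacian_pyramid(a, levels), laplacian_pyramid(b, levels)
    gm = gaussian_pyramid(mask.astype(np.float64), levels)   # soft mask at every level
    blended = [m * x + (1 - m) * y for x, y, m in zip(la, lb, gm)]
    out = blended[-1]
    for lvl in reversed(blended[:-1]):                       # collapse the pyramid
        out = ndimage.zoom(out, np.array(lvl.shape) / np.array(out.shape)) + lvl
    return out
```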

Alpha blending. Encode the blend weights in the image itself: I(x, y) = (αR, αG, αB, α). The color at a pixel p is then Σᵢ αᵢ·(Rᵢ, Gᵢ, Bᵢ) / Σᵢ αᵢ over the images that cover p. Implement this in two steps: 1. accumulate: add up the (α-premultiplied) R, G, B values at each pixel; 2. normalize: divide each pixel’s accumulated RGB by its accumulated α value. Q: what if α = 0? Optional: see Blinn (IEEE Computer Graphics and Applications, 1994) for details.
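A minimal sketch of the accumulate/normalize steps (assuming NumPy; the data layout, lists of images and alpha maps on a common canvas, is illustrative):

```python
import numpy as np

def alpha_blend(images, alphas):
    """images: list of HxWx3 float arrays; alphas: list of HxW weight maps."""
    accum_rgb = np.zeros_like(images[0], dtype=np.float64)
    accum_a = np.zeros(images[0].shape[:2], dtype=np.float64)
    for img, a in zip(images, alphas):
        accum_rgb += img * a[..., None]               # step 1: accumulate premultiplied RGB
        accum_a += a
    safe_a = np.where(accum_a == 0, 1.0, accum_a)     # where alpha == 0, leave the pixel black
    return accum_rgb / safe_a[..., None]              # step 2: normalize by accumulated alpha
```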

Poisson Image Editing. For more info: Pérez et al., SIGGRAPH 2003.

Image warping. Given a coordinate transform (x’, y’) = h(x, y) and a source image f(x, y), how do we compute a transformed image g(x’, y’) = f(h(x, y))?

Forward warping. Send each pixel f(x, y) to its corresponding location (x’, y’) = h(x, y) in the second image. Q: what if a pixel lands “between” two pixels?

Forward warping. Send each pixel f(x, y) to its corresponding location (x’, y’) = h(x, y) in the second image. Q: what if a pixel lands “between” two pixels? A: distribute its color among the neighboring pixels of (x’, y’); this is known as “splatting”.

Inverse warping. Get each pixel g(x’, y’) from its corresponding location (x, y) = h⁻¹(x’, y’) in the first image. Q: what if a pixel comes from “between” two pixels?

Inverse warping. Get each pixel g(x’, y’) from its corresponding location (x, y) = h⁻¹(x’, y’) in the first image. Q: what if a pixel comes from “between” two pixels? A: resample the color value. We discussed resampling techniques before: nearest neighbor, bilinear, Gaussian, bicubic.
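A minimal sketch of inverse warping with bilinear resampling (assuming NumPy and a grayscale source image; inv_map stands in for h⁻¹ and is an assumed callable, not a function from the lecture):

```python
import numpy as np

def inverse_warp(src, out_shape, inv_map):
    """src: HxW float image; inv_map(xp, yp) returns source coords (x, y) for output pixels."""
    h, w = out_shape
    yy, xx = np.indices((h, w), dtype=np.float64)
    sx, sy = inv_map(xx, yy)                                  # (x, y) = h^{-1}(x', y')
    x0 = np.clip(np.floor(sx).astype(int), 0, src.shape[1] - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, src.shape[0] - 2)
    ax, ay = sx - x0, sy - y0                                 # fractional offsets
    # Bilinear interpolation of the four neighboring source pixels
    return ((1 - ax) * (1 - ay) * src[y0, x0] + ax * (1 - ay) * src[y0, x0 + 1] +
            (1 - ax) * ay * src[y0 + 1, x0] + ax * ay * src[y0 + 1, x0 + 1])
```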

Forward vs. inverse warping. Q: which is better? A: usually inverse, since it eliminates holes; however, it requires an invertible warp function, which is not always possible...

Other types of mosaics. You can mosaic onto any surface if you know the geometry. See NASA’s Visible Earth project for some stunning Earth mosaics.