Model Fitting. Hao Jiang, Computer Science Department, Sept 30, 2014.


Model Fitting  An object model constrains the kind of object we wish to find in images. 2 window Front wheel Back wheel

Types of 2D Models: rigid models, deformable models, articulated models.

Template Matching. Sweep the template across the target image and compare at each location.

Matching Criteria. At each location (x, y) the matching cost is, e.g., the sum of squared differences (SSD) between the template T and the image patch: cost(x, y) = Σ_{m,n} (I(x+m, y+n) - T(m, n))^2.
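As a minimal sketch of SSD template matching (not from the slides; the function name `ssd_match` and the toy image are invented for illustration), the cost map can be computed with an explicit double loop:

```python
import numpy as np

def ssd_match(image, template):
    """Slide the template over the image and return the SSD cost map:
    cost(y, x) = sum over (m, n) of (image[y+m, x+n] - template[m, n])**2.
    A brute-force sketch; real code uses FFT-based correlation instead."""
    ih, iw = image.shape
    th, tw = template.shape
    cost = np.empty((ih - th + 1, iw - tw + 1))
    for y in range(cost.shape[0]):
        for x in range(cost.shape[1]):
            patch = image[y:y + th, x:x + tw]
            cost[y, x] = np.sum((patch - template) ** 2)
    return cost

# toy example: a bright 3x3 square at (row 2, col 3)
image = np.zeros((8, 8))
image[2:5, 3:6] = 1.0
template = np.ones((3, 3))
cost = ssd_match(image, template)
best = np.unravel_index(np.argmin(cost), cost.shape)
```

The best match is simply the argmin of the cost map; here it recovers the square's top-left corner.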

Dealing with Rotations. Match rotated versions of the template against the target image.

Dealing with Scale Changes. Match the template against the target image at multiple scales: sweep the window at each scale and find the best match.

Multi-scale Refinement. Start the search at the coarsest pyramid level. Complexity is reduced from O(n^2 k^2) to O(log(n/m) + m^2 j^2), where n is the image size, k the template size, m the coarsest image size, and j the coarsest template size.

Multi-scale Refinement:
1. Generate the image pyramids.
2. Exhaustively search the smallest image and get the location (x, y).
3. Go to the next level of the pyramid (scaling the estimate up accordingly) and check the locations (x, y), (x+1, y), (x, y+1), (x+1, y+1); pick the best and update the estimate (x, y).
4. If at the bottom of the pyramid, stop; otherwise go to 3.
5. Output the estimate (x, y).
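The steps above can be sketched as follows, assuming a 2x block-averaging pyramid and even-sized inputs; the helper names (`downsample`, `coarse_to_fine`) are made up for this sketch:

```python
import numpy as np

def downsample(img):
    """Halve resolution by 2x2 block averaging (assumes even dimensions)."""
    return (img[0::2, 0::2] + img[1::2, 0::2] +
            img[0::2, 1::2] + img[1::2, 1::2]) / 4.0

def ssd(image, template, y, x):
    """SSD of the template against the patch at (y, x); inf if out of bounds."""
    th, tw = template.shape
    patch = image[y:y + th, x:x + tw]
    if patch.shape != template.shape:
        return np.inf
    return np.sum((patch - template) ** 2)

def coarse_to_fine(image, template, levels=2):
    """1. Build pyramids.  2. Exhaustive search at the coarsest level.
    3. At each finer level, double the estimate and test the four
    neighboring offsets.  4. Return the refined (y, x)."""
    img_pyr, tpl_pyr = [image], [template]
    for _ in range(levels):
        img_pyr.append(downsample(img_pyr[-1]))
        tpl_pyr.append(downsample(tpl_pyr[-1]))
    img, tpl = img_pyr[-1], tpl_pyr[-1]
    costs = [(ssd(img, tpl, y, x), (y, x))
             for y in range(img.shape[0] - tpl.shape[0] + 1)
             for x in range(img.shape[1] - tpl.shape[1] + 1)]
    y, x = min(costs)[1]
    for level in range(levels - 1, -1, -1):
        img, tpl = img_pyr[level], tpl_pyr[level]
        y, x = 2 * y, 2 * x
        cands = [(y + dy, x + dx) for dy in (0, 1) for dx in (0, 1)]
        y, x = min(cands, key=lambda p: ssd(img, tpl, p[0], p[1]))
    return y, x

# toy example: a 4x4 bright square at (row 4, col 8) in a 16x16 image
image = np.zeros((16, 16))
image[4:8, 8:12] = 1.0
template = np.ones((4, 4))
loc = coarse_to_fine(image, template, levels=2)
```

Only four candidate offsets are tested per level, which is where the complexity saving over exhaustive search comes from.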

Binary Template Matching. D. M. Gavrila, "Pedestrian Detection from a Moving Vehicle", ECCV 2000.

Matching a Binary Template. The template is matched against a binary target image containing the target object plus clutter.

Distance between Curves. For each point p on the red curve, find the closest point q on the green curve and denote the distance d_red(p); define d_green(q) symmetrically. The distance from the red curve to the green curve is then, e.g., the mean of d_red(p) over all p, and the distance from the green curve to the red curve is defined in the same way.

Distance Transform. Transform a binary image into an image whose pixel values are the distance to the nearest foreground pixel.
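A brute-force version of the distance transform is easy to write for small images (library routines such as MATLAB's `bwdist` or SciPy's `distance_transform_edt` implement fast exact algorithms). This NumPy sketch simply takes, for every pixel, the minimum Euclidean distance over all foreground pixels:

```python
import numpy as np

def distance_transform(binary):
    """Brute-force Euclidean distance transform: each pixel gets the
    distance to the nearest foreground (nonzero) pixel.  O(pixels *
    foreground pixels); for small images and illustration only."""
    fg = np.argwhere(binary)                 # (row, col) of foreground pixels
    rows, cols = np.indices(binary.shape)
    pts = np.stack([rows, cols], axis=-1)    # shape (H, W, 2)
    # distance from every pixel to every foreground pixel, then the min
    d = np.linalg.norm(pts[:, :, None, :] - fg[None, None, :, :], axis=-1)
    return d.min(axis=2)

# toy example: a single foreground pixel at the center of a 5x5 image
binary = np.zeros((5, 5))
binary[2, 2] = 1
dt = distance_transform(binary)
```

Each pixel of `dt` now holds its straight-line distance to the center pixel.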

Chamfer Matching. With the distance transform dist_I of the target image, d_red(p) = dist_I(p): each template point p reads its distance to the nearest image edge directly from the distance transform, so the matching cost is a sum of lookups.

Chamfer Matching in Matlab:
% distance transform of the binary target image
dist = bwdist(imTarget, 'euclidean');
% correlate the template with the distance map: low values = good match
map = imfilter(dist, imTemplate, 'corr', 'same');
% look for the local minima in map to locate objects
smap = ordfilt2(map, 1, true(5));   % 5x5 local minimum
[r, c] = find((map == smap) & (map < threshold));

Hough Transform  Template matching may become very complex if transformations such as rotation and scale is allowed.  A voting based method, Hough Transform, can be used to solve the problem.  Instead of trying to match the target at each location, we collect votes in a parameter space using features in the original image. 20

Line Detection. Given a set of edge points in the (x, y) plane: (1) Where are the lines? (2) How many lines are there? (3) Which point belongs to which line?

Line Detection. Parameterize lines as y = a x + b. For each point (x1, y1), there are many lines that pass through it.

Line Detection. In the (a, b) parameter space, all the lines passing through (x1, y1) are represented by a single line: b = -x1 a + y1.

Line Detection. The parameter-space lines b = -x1 a + y1, b = -x2 a + y2, ..., b = -x(n) a + y(n) intersect at one point: collinear points vote for the same point in the (a, b) space, and the point that receives the most votes corresponds to the detected line.

Line Parameterization. Polar form: x cos(θ) + y sin(θ) = d, where θ is the orientation of the line's normal and d is its distance from the origin.

Parameter Space. Each edge point traces a sinusoid in the (θ, d) parameter space: x cos(θ) + y sin(θ) = d.

Hough Transform Algorithm. Using the polar parameterization x cos(θ) + y sin(θ) = d, with H the accumulator array of votes:
1. Initialize H[d, θ] = 0.
2. For each edge point (x, y) in the image: for θ = 0 to 180 (at some quantization), compute d = x cos(θ) + y sin(θ) and set H[d, θ] += 1.
3. Find the value(s) of (d, θ) where H[d, θ] is maximum.
4. The detected line in the image is given by x cos(θ) + y sin(θ) = d.
Time complexity (in terms of number of votes): O(number of edge points × number of θ bins).
Source: Steve Seitz
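The accumulator loop can be sketched in a few lines of NumPy (the helper name `hough_lines` is invented; it uses 1-pixel d bins and 1-degree θ bins, and the offset `d_max` keeps the d index nonnegative since d can be negative for θ near 180°):

```python
import numpy as np

def hough_lines(edge_points, shape, n_theta=180):
    """Accumulate votes in (d, theta) space for the parameterization
    x*cos(theta) + y*sin(theta) = d.  edge_points is a list of (x, y)."""
    h, w = shape
    d_max = int(np.ceil(np.hypot(h, w)))          # |d| can't exceed this
    thetas = np.deg2rad(np.arange(n_theta))       # 0..179 degrees
    cols = np.arange(n_theta)
    H = np.zeros((2 * d_max + 1, n_theta), dtype=int)
    for x, y in edge_points:
        # one vote per theta bin along the point's sinusoid
        d = np.round(x * np.cos(thetas) + y * np.sin(thetas)).astype(int)
        H[d + d_max, cols] += 1
    return H, d_max

# 10 collinear points on the vertical line x = 3 (theta = 0, d = 3)
pts = [(3, y) for y in range(10)]
H, d_max = hough_lines(pts, (10, 10))
```

All 10 points vote for the cell (d, θ) = (3, 0°), which therefore holds the peak vote count.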

Example: Hough transform for straight lines. Left: image-space edge coordinates (x, y); right: votes in (θ, d) space. Bright value = high vote count; black = no votes. Source: K. Grauman

Impact of noise on Hough. Image-space edge coordinates (x, y) and the corresponding votes in (θ, d) space. What difficulty does this present for an implementation? Source: K. Grauman

Impact of noise on Hough. Here everything appears to be "noise" (random edge points), but we still see peaks in the vote space. Source: K. Grauman

Extensions. Extension 1: use the image gradient. Step 2 becomes: for each edge point (x, y), compute a unique (d, θ) from the image gradient at (x, y) and set H[d, θ] += 1; this reduces the degrees of freedom. Extension 2: give more votes to stronger edges (use the gradient magnitude). Extension 3: change the sampling of (d, θ) to give more or less resolution. Extension 4: the same procedure can be used with circles, squares, or any other shape. Source: K. Grauman

Hough transform for circles. For a fixed radius r and unknown gradient direction (circle: center (a, b), radius r), each edge point votes along a circle in Hough space; the intersection, where the most votes for the center occur, gives the circle center. Source: K. Grauman

Hough transform for circles  For an unknown radius r, unknown gradient direction Circle: center (a,b) and radius r Hough space Image space b a r Source: K. Grauman

Hough transform for circles  For an unknown radius r, unknown gradient direction Circle: center (a,b) and radius r Hough space Image space b a r Source: K. Grauman

Hough transform for circles  For an unknown radius r, known gradient direction Circle: center (a,b) and radius r Hough spaceImage space θ x Source: K. Grauman

Hough transform for circles:
for every edge pixel (x, y):
    for each possible radius value r:
        for each possible gradient direction θ:   // or use the estimated gradient
            a = x - r cos(θ)
            b = y + r sin(θ)
            H[a, b, r] += 1
Source: K. Grauman
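A NumPy sketch of this triple loop (the helper name `hough_circles` is invented; note that the sign of the sin term depends on the image coordinate convention, and here the candidate center is taken as the edge point minus r·(cos θ, sin θ); duplicate votes from a single point are removed with `np.unique` so each edge point contributes at most one vote per cell):

```python
import numpy as np

def hough_circles(edge_points, shape, radii, n_theta=100):
    """Vote for circle centers: an edge point (x, y) on a circle of
    radius r centered at (a, b) satisfies a = x - r*cos(theta),
    b = y - r*sin(theta); without gradient information we sweep theta."""
    h, w = shape
    H = np.zeros((len(radii), h, w), dtype=int)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    for x, y in edge_points:
        for ri, r in enumerate(radii):
            a = np.round(x - r * np.cos(thetas)).astype(int)
            b = np.round(y - r * np.sin(thetas)).astype(int)
            ok = (a >= 0) & (a < w) & (b >= 0) & (b < h)
            # one vote per distinct center cell from this edge point
            centers = np.unique(np.stack([b[ok], a[ok]], axis=1), axis=0)
            H[ri, centers[:, 0], centers[:, 1]] += 1
    return H

# the 12 integer points at distance exactly 5 from center (6, 6)
pts = [(11, 6), (1, 6), (6, 11), (6, 1),
       (9, 10), (9, 2), (3, 10), (3, 2),
       (10, 9), (10, 3), (2, 9), (2, 3)]
H = hough_circles(pts, (13, 13), [5])
```

All 12 edge points vote for the true center, so the cell (a, b, r) = (6, 6, 5) holds the peak.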

Example: detecting circles with Hough. The crosshair indicates the result of the Hough transform; the bounding box was found via motion differencing. Source: K. Grauman

Example: detecting circles with Hough. Original, edges, votes (penny). Note: a different Hough transform (with separate accumulators) was used for each circle radius (quarters vs. penny). Source: K. Grauman

Example: detecting circles with Hough. Original, edges, votes (quarter). Coin-finding sample images from Vivek Kwatra. Source: K. Grauman

General Hough Transform. The Hough transform can also be used to detect arbitrarily shaped objects, given a template and a target.

General Hough Transform  Rotation Invariance 49

Example: model shape. Say we've already stored a table of displacement vectors as a function of edge orientation for this model shape. Source: L. Lazebnik

Example: displacement vectors for model points. Now we want to look at some edge points detected in a new image and vote on the position of that shape.

Example: range of voting locations for a test point.

Example: votes for points with a given edge orientation θ.

Example displacement vectors for model points

Example range of voting locations for test point

Example: votes for points with a given edge orientation θ.

Application in recognition. Instead of indexing displacements by gradient orientation, index by "visual codeword". Training image: visual codewords with displacement vectors. B. Leibe, A. Leonardis, and B. Schiele, "Combined Object Categorization and Segmentation with an Implicit Shape Model", ECCV Workshop on Statistical Learning in Computer Vision, 2004. Source: L. Lazebnik

Application in recognition. Instead of indexing displacements by gradient orientation, index by "visual codeword": test image. Source: L. Lazebnik

RANSAC  RANSAC – Random Sampling Consensus  Procedure:  Randomly generate a hypothesis  Test the hypothesis  Repeat for large enough times and choose the best one 60

Line Detection. Count "inliers": 7.

Line Detection. Inlier count: 3.

Line Detection. After many trials, we decide this is the best line.

Object Detection Using RANSAC: template and target.

Scale- and Rotation-Invariant Features: SIFT (D. Lowe).

Stable Features. The values at local max/min points are stable when the scale changes.

SIFT. Filter the image with filters at different scales (for example, Gaussian filters).

Difference of Gaussians (DoG).

SIFT Feature Points. (b) shows the points at the local maxima/minima of the DoG scale space for the image in (a).

Matching SIFT: template and target image.

Matching SIFT Using Nearest Neighbors: matching result.

Matching SIFT Using Nearest Neighbors. Keep only matches whose cost ratio (best_match_cost / second_best_match_cost) is below 0.7.
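The ratio test itself is independent of SIFT and easy to sketch with toy descriptors (`match_ratio_test` is a made-up helper; real code would use 128-D SIFT descriptors and an approximate nearest-neighbor structure instead of brute force):

```python
import numpy as np

def match_ratio_test(desc1, desc2, ratio=0.7):
    """Nearest-neighbor matching with the ratio test: accept a match only
    when best_distance / second_best_distance < ratio.  desc1 and desc2
    are (N, D) and (M, D) descriptor arrays; returns (i, j) index pairs."""
    matches = []
    for i, d in enumerate(desc1):
        dists = np.linalg.norm(desc2 - d, axis=1)
        order = np.argsort(dists)
        best, second = order[0], order[1]
        if dists[best] < ratio * dists[second]:
            matches.append((i, best))
    return matches

# toy 2-D "descriptors": the first query has a clearly unique neighbor,
# the second is ambiguous and should be rejected by the ratio test
desc1 = np.array([[0.0, 0.0], [5.0, 5.0]])
desc2 = np.array([[0.1, 0.0], [3.0, 3.0], [5.0, 5.2], [5.2, 5.0]])
matches = match_ratio_test(desc1, desc2)
```

The second query's two nearest neighbors are equally close, so its ratio is 1.0 and the match is discarded.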

RANSAC  Generate matching hypothesis 78 Template Target

RANSAC  Generate matching hypothesis 79 Template Target Randomly choose 3 pairs

RANSAC  Generate matching hypothesis 80 Template Target Transform all the template Points to the target image T

Affine Transformation. A point (x, y) in the template is mapped to (u, v) in the target by the linear functions: u = a x + b y + c, v = d x + e y + f.

Affine Transformation. In matrix form:
[u]   [a b] [x]   [c]
[v] = [d e] [y] + [f]

Affine Transformation. Using homogeneous coordinates:
[u]   [a b c] [x]
[v] = [d e f] [y]
              [1]
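Given three or more correspondences, the six parameters can be recovered by least squares on the homogeneous form, since each correspondence contributes two linear equations (`fit_affine` is a made-up helper; MATLAB's cp2tform on the next slide does the same job):

```python
import numpy as np

def fit_affine(src, dst):
    """Estimate the affine map (u, v) = (a x + b y + c, d x + e y + f)
    from point correspondences by least squares.  src and dst are (N, 2)
    arrays with N >= 3 non-collinear points.  Returns [[a, b, c], [d, e, f]]."""
    src = np.asarray(src, float)
    dst = np.asarray(dst, float)
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    # solve A @ [a, b, c]^T = u and A @ [d, e, f]^T = v simultaneously
    params, *_ = np.linalg.lstsq(A, dst, rcond=None)
    return params.T

# sanity check: recover a known transform (axis scaling plus a shift)
M_true = np.array([[2.0, 0.0, 1.0],
                   [0.0, 3.0, -2.0]])
src = np.array([[0, 0], [1, 0], [0, 1], [2, 3]], float)
dst = src @ M_true[:, :2].T + M_true[:, 2]
M = fit_affine(src, dst)
```

With exact correspondences the least-squares solution reproduces the true parameters; with noisy RANSAC matches it gives the best fit to the sampled pairs.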

Matlab:
% estimate an affine transform from point correspondences
% src_points: [x1 y1; x2 y2; x3 y3; ...]
% target_points: [u1 v1; u2 v2; u3 v3; ...]
t = cp2tform(src_points, target_points, 'affine');
% apply it to a point (x, y):
q = [x y 1] * t.tdata.T;   % q = [u, v, 1]
% other transformation types: 'similarity', 'projective'

RANSAC  Validation 85 Template Target T Match points to the closes sift target points and compute the overall SIFT feature difference

Demo and Assignment Discussion: template and target.

Feature-based alignment outline. Extract features; compute putative matches; then loop: hypothesize a transformation T from a small group of putative matches related by T, and verify T by searching for other matches consistent with it. Source: L. Lazebnik