Fitting Thursday, Sept 24 Kristen Grauman UT-Austin

Last time: grouping / segmentation to identify coherent image regions
– Clustering algorithms: K-means; graph cuts (normalized cuts criterion)
– Feature spaces: color, intensity, texture, position…

Review: What is the objective function for k-means; that is, what does the algorithm try to minimize? The best assignment of points to clusters is the one that minimizes the total squared distance between each point and its assigned (nearest) cluster center.
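For reference, a compact way to write that objective (the standard k-means formulation, not spelled out on the slide): assign each point x_i to a cluster c_i with center μ_k, minimizing the total squared distance to the assigned centers.

```latex
% k-means objective: total squared distance of points to their assigned centers
\[
  \min_{\{c_i\},\,\{\mu_k\}} \;\sum_{i=1}^{N} \bigl\lVert x_i - \mu_{c_i} \bigr\rVert^2
\]
```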

Review Assuming we use a fully connected graph, what is the time complexity of computing the affinities for a graph cuts-based segmentation?

Fitting Want to associate a model with observed features [Fig from Marszalek & Schmid, 2007] For example, the model could be a line, a circle, or an arbitrary shape.

Fitting: choose a parametric model to represent a set of features.
– Membership criterion is not local: can't tell whether a point belongs to a given model just by looking at that point.
– Three main questions: What model represents this set of features best? Which of several model instances gets which feature? How many model instances are there?
– Computational complexity is important: it is infeasible to examine every possible set of parameters and every possible combination of features.
Source: L. Lazebnik

Example: Line fitting. Why fit lines? Many objects are characterized by the presence of straight lines. Wait, why aren't we done just by running edge detection?

Difficulty of line fitting:
– Extra edge points (clutter), multiple models: which points go with which line, if any?
– Only some parts of each line detected, and some parts are missing: how to find a line that bridges missing evidence?
– Noise in measured edge points and orientations: how to detect the true underlying parameters?

Voting: It's not feasible to check all combinations of features by fitting a model to each possible subset. Voting is a general technique where we let each feature vote for all models that are compatible with it.
– Cycle through features, casting votes for model parameters.
– Look for model parameters that receive a lot of votes.
Noise and clutter features will cast votes too, but typically their votes are inconsistent with those of the majority of "good" features. It is OK if some features are not observed, since the model can span multiple fragments.

Fitting lines: Given points that belong to a line, what is the line? How many lines are there? Which points belong to which lines? The Hough transform is a voting technique that can be used to answer all of these questions. Main idea: 1. Record all possible lines on which each edge point lies. 2. Look for the lines that get many votes.

Finding lines in an image: Hough space. Connection between image (x,y) space and Hough (m,b) space: a line y = mx + b in the image corresponds to a single point (m0, b0) in Hough space. To go from image space to Hough space: given a set of points (x,y), find all (m,b) such that y = mx + b. [Figure: image space with axes x, y; Hough (parameter) space with axes m, b] Slide credit: Steve Seitz

Finding lines in an image: Hough space. A line in the image corresponds to a point in Hough space; to go from image space to Hough space, given a set of points (x,y), find all (m,b) such that y = mx + b. What does a single point (x0, y0) in image space map to? Answer: the solutions of b = –x0 m + y0, which is a line in Hough space. Slide credit: Steve Seitz

Finding lines in an image: Hough space. What are the line parameters (m, b) for the line that contains both (x0, y0) and (x1, y1)? It is the intersection of the Hough-space lines b = –x0 m + y0 and b = –x1 m + y1.
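For completeness, the short algebra behind that intersection (standard, not shown on the slide):

```latex
% Intersect the two Hough-space lines b = -x_0 m + y_0 and b = -x_1 m + y_1:
\[
  -x_0 m + y_0 = -x_1 m + y_1
  \;\Longrightarrow\;
  m = \frac{y_1 - y_0}{x_1 - x_0},
  \qquad
  b = y_0 - m\,x_0 ,
\]
% which are exactly the slope and intercept of the image-space line
% through (x_0, y_0) and (x_1, y_1).
```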

Finding lines in an image: Hough algorithm. How can we use this to find the most likely parameters (m, b) for the most prominent line in image space? Let each edge point in image space vote for a set of possible parameters in Hough space. Accumulate votes in a discrete set of bins; the parameters with the most votes indicate the line in image space.

Polar representation for lines: d = perpendicular distance from the line to the origin, θ = angle the perpendicular makes with the x-axis, so the line is x cos θ + y sin θ = d. A point in image space maps to a sinusoid segment in Hough space. Issues with the usual (m,b) parameter space: m and b can take on arbitrarily large values, and the representation is undefined for vertical lines.
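To see why a point maps to a sinusoid (a standard identity, added here for clarity): for a fixed edge point (x0, y0), sweeping θ gives

```latex
\[
  d(\theta) \;=\; x_0 \cos\theta + y_0 \sin\theta
            \;=\; \sqrt{x_0^2 + y_0^2}\,\sin(\theta + \varphi),
  \qquad \tan\varphi = x_0 / y_0 ,
\]
% so d stays within the point's distance from the origin; unlike (m, b),
% the polar parameters (d, theta) live in a bounded range.
```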

Hough transform algorithm. Using the polar parameterization d = x cos θ + y sin θ, with H as the accumulator array of votes over (d, θ):
1. Initialize H[d, θ] = 0 for all bins.
2. For each edge point I[x,y] in the image:
   for θ = 0 to 180 (some quantization):
     d = x cos θ + y sin θ
     H[d, θ] += 1
3. Find the value(s) of (d, θ) where H[d, θ] is maximum.
4. The detected line in the image is given by x cos θ + y sin θ = d.
(Hough line demo.) Time complexity (in terms of number of votes)? Source: Steve Seitz
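A minimal NumPy sketch of this algorithm (my own illustration, not the lecture's code; the binary edge map `edges` is assumed to come from an edge detector such as Canny):

```python
import numpy as np

def hough_lines(edges, num_thetas=180):
    """Accumulate votes in (d, theta) space for a binary edge map."""
    h, w = edges.shape
    thetas = np.deg2rad(np.arange(0, 180, 180 / num_thetas))
    d_max = int(np.ceil(np.hypot(h, w)))               # largest possible |d|
    H = np.zeros((2 * d_max + 1, len(thetas)), dtype=np.int64)

    ys, xs = np.nonzero(edges)                         # coordinates of edge points
    for x, y in zip(xs, ys):
        for t_idx, theta in enumerate(thetas):
            d = x * np.cos(theta) + y * np.sin(theta)  # polar line parameterization
            H[int(round(d)) + d_max, t_idx] += 1       # shift so the index is non-negative
    return H, d_max, thetas

# Usage: the peak of H gives the dominant line's (d, theta).
# H, d_max, thetas = hough_lines(edges)
# d_idx, t_idx = np.unravel_index(np.argmax(H), H.shape)
# d, theta = d_idx - d_max, thetas[t_idx]
```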

Example: Hough transform for straight lines. [Figure: image-space edge coordinates (x, y) and the vote array over (d, θ)] Bright value = high vote count; black = no votes.

Example: Hough transform for straight lines. [Figures: vote spaces for a circle and for a square]

Example: Hough transform for straight lines

Showing longest segments found

Impact of noise on Hough. [Figure: image-space edge coordinates and the (d, θ) vote space] What difficulty does this present for an implementation?

Impact of noise on Hough. [Figure: image-space edge coordinates and the vote space] Here, everything appears to be "noise" (random edge points), but we still see peaks in the vote space.

Extensions. Extension 1: use the image gradient —
1. same
2. for each edge point I[x,y] in the image: θ = gradient orientation at (x,y); H[d, θ] += 1
3. same
4. same
(Reduces degrees of freedom.) Extension 2: give more votes for stronger edges. Extension 3: change the sampling of (d, θ) to give more/less resolution. Extension 4: the same procedure can be used with circles, squares, or any other shape.

Extensions. Extension 1: use the image gradient —
1. same
2. for each edge point I[x,y] in the image: compute a unique (d, θ) based on the image gradient at (x,y); H[d, θ] += 1
3. same
4. same
(Reduces degrees of freedom.) Extension 2: give more votes for stronger edges (use the magnitude of the gradient). Extension 3: change the sampling of (d, θ) to give more/less resolution. Extension 4: the same procedure can be used with circles, squares, or any other shape…
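A hedged sketch of Extensions 1 and 2, assuming gradient images `gx`, `gy` are available (e.g. from Sobel filters); each edge point now casts one weighted vote instead of one per θ bin:

```python
import numpy as np

def hough_lines_gradient(edges, gx, gy, d_max, theta_step_deg=1):
    """Each edge point votes once, using its gradient orientation as theta,
    weighted by gradient magnitude (Extensions 1 and 2)."""
    n_theta = int(180 / theta_step_deg)
    H = np.zeros((2 * d_max + 1, n_theta), dtype=np.float64)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        theta = np.arctan2(gy[y, x], gx[y, x]) % np.pi      # gradient orientation in [0, pi)
        d = x * np.cos(theta) + y * np.sin(theta)
        t_idx = int(np.degrees(theta) / theta_step_deg) % n_theta
        H[int(round(d)) + d_max, t_idx] += np.hypot(gx[y, x], gy[y, x])
    return H
```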

Hough transform for circles. For a fixed radius r and unknown gradient direction. Circle with center (a,b) and radius r: (x − a)² + (y − b)² = r². [Figures: image space and Hough space]

Hough transform for circles. For a fixed radius r and unknown gradient direction. Circle with center (a,b) and radius r. [Figures: image space and Hough space] Intersection: most votes for the center occur here.

Hough transform for circles. For an unknown radius r and unknown gradient direction. Circle with center (a,b) and radius r. [Figures: image space and the (a, b, r) Hough space]

Hough transform for circles. For an unknown radius r and known gradient direction. Circle with center (a,b) and radius r. [Figures: image space, with gradient direction θ at an edge point, and Hough space]

Hough transform for circles:
For every edge pixel (x,y):
  For each possible radius value r:
    For each possible gradient direction θ:   // or use the estimated gradient
      a = x – r cos(θ)
      b = y + r sin(θ)
      H[a, b, r] += 1
end
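A direct translation of this pseudocode into NumPy (my own sketch; it keeps the slide's update rule, so it assumes image coordinates with y increasing downward):

```python
import numpy as np

def hough_circles(edges, radii, n_theta=360):
    """Vote in (b, a, r) space for circle centers, one vote per (edge point, radius, theta)."""
    h, w = edges.shape
    H = np.zeros((h, w, len(radii)), dtype=np.int64)
    thetas = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    ys, xs = np.nonzero(edges)
    for x, y in zip(xs, ys):
        for r_idx, r in enumerate(radii):
            for theta in thetas:                       # or: use only the estimated gradient direction
                a = int(round(x - r * np.cos(theta)))  # candidate center, as on the slide
                b = int(round(y + r * np.sin(theta)))
                if 0 <= a < w and 0 <= b < h:
                    H[b, a, r_idx] += 1                # rows indexed by b (y), columns by a (x)
    return H

# Usage: the strongest peaks of H give candidate circles (a, b, radii[r_idx]).
```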

Example: detecting circles with Hough Crosshair indicates results of Hough transform, bounding box found via motion differencing.

Example: detecting circles with Hough. [Figures: original image, edge map, and vote space for the penny radius] Note: a different Hough transform (with separate accumulators) was used for each circle radius (quarters vs. penny).

Example: detecting circles with Hough. [Figures: original image, edge map, vote space for the quarter radius, and combined detections] Coin-finding sample images from: Vivek Kwatra

Voting: practical tips
– Minimize irrelevant tokens first (take edge points with significant gradient magnitude).
– Choose a good grid / discretization: too coarse, and large vote counts are obtained when too many different lines correspond to a single bucket; too fine, and lines are missed because points that are not exactly collinear cast votes for different buckets.
– Vote for neighbors, also (smoothing in the accumulator array).
– Utilize the direction of the edge to reduce the free parameters by 1.
– To read back which points voted for "winning" peaks, keep tags on the votes.
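One way to realize the "vote for neighbors" tip is to smooth the accumulator before looking for peaks; a small sketch using SciPy (the accumulator `H` is assumed to come from one of the sketches above):

```python
import numpy as np
from scipy.ndimage import gaussian_filter, maximum_filter

def find_peaks(H, sigma=1.0, min_votes=50):
    """Smooth the accumulator, then keep local maxima above a vote threshold."""
    H_smooth = gaussian_filter(H.astype(float), sigma=sigma)  # spreads each vote to neighboring bins
    local_max = maximum_filter(H_smooth, size=3) == H_smooth  # local maxima in a small neighborhood
    return np.argwhere(local_max & (H_smooth >= min_votes))   # accumulator indices of the peaks
```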

Hough transform: pros and cons.
Pros:
– All points are processed independently, so it can cope with occlusion.
– Some robustness to noise: noise points are unlikely to contribute consistently to any single bin.
– Can detect multiple instances of a model in a single pass.
Cons:
– Search time increases exponentially with the number of model parameters.
– Non-target shapes can produce spurious peaks in parameter space.
– Quantization: hard to pick a good grid size.

Generalized Hough transform. What if we want to detect arbitrary shapes defined by boundary points and a reference point a? [Dana H. Ballard, Generalizing the Hough Transform to Detect Arbitrary Shapes, 1980] At each boundary point p_i, compute the displacement vector r = a – p_i. For a given model shape, store these vectors in a table indexed by gradient orientation θ.

Generalized Hough transform. To detect the model shape in a new image:
– For each edge point, index into the table with its gradient orientation θ, and use the retrieved r vectors to vote for the position of the reference point.
– The peak in this Hough space is the reference point with the most supporting edges.
(Assuming translation is the only transformation here, i.e., orientation and scale are fixed.)
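A minimal sketch of both phases, i.e. building the table of displacement vectors (often called an R-table) and voting with it; this is my own illustration, assuming edge points and their gradient orientations are already computed, and handling translation only, as on the slide:

```python
import numpy as np
from collections import defaultdict

def build_r_table(model_points, model_orients, reference, n_bins=36):
    """Index displacement vectors r = a - p_i by quantized gradient orientation."""
    table = defaultdict(list)
    for p, theta in zip(model_points, model_orients):
        bin_idx = int(theta / (2 * np.pi) * n_bins) % n_bins
        table[bin_idx].append(np.asarray(reference, float) - np.asarray(p, float))
    return dict(table)

def vote_for_reference(edge_points, edge_orients, table, image_shape, n_bins=36):
    """Each edge point retrieves the r vectors for its orientation and votes
    for candidate reference-point locations."""
    H = np.zeros(image_shape, dtype=np.int64)
    for p, theta in zip(edge_points, edge_orients):
        bin_idx = int(theta / (2 * np.pi) * n_bins) % n_bins
        for r in table.get(bin_idx, []):
            a = np.round(np.asarray(p, float) + r).astype(int)   # candidate reference point (row, col)
            if 0 <= a[0] < image_shape[0] and 0 <= a[1] < image_shape[1]:
                H[a[0], a[1]] += 1
    return H   # the peak is the reference-point location with the most supporting edges
```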

Example: model shape. Say we've already stored a table of displacement vectors as a function of edge orientation for this model shape. Source: L. Lazebnik

Example displacement vectors for model points Now we want to look at some edge points detected in a new image, and vote on the position of that shape.

Example range of voting locations for test point

Example range of voting locations for test point

Example votes for points with θ =

Example displacement vectors for model points

Example range of voting locations for test point

Example votes for points with θ =

Application in recognition. Instead of indexing displacements by gradient orientation, index by "visual codeword". B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision, 2004. [Figure: training image; visual codeword with displacement vectors] Source: L. Lazebnik

Application in recognition. Instead of indexing displacements by gradient orientation, index by "visual codeword". B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision, 2004. [Figure: test image] Source: L. Lazebnik

Implicit shape models: Training. 1. Build a codebook of patches around extracted interest points using clustering (more on this later in the course).

Implicit shape models: Training. 1. Build a codebook of patches around extracted interest points using clustering. 2. Map the patch around each interest point to the closest codebook entry.

Implicit shape models: Training. 1. Build a codebook of patches around extracted interest points using clustering. 2. Map the patch around each interest point to the closest codebook entry. 3. For each codebook entry, store all positions at which it was found, relative to the object center.

Implicit shape models: Testing. 1. Given a test image, extract patches and match each to its closest codebook entry. 2. Cast votes for possible positions of the object center. 3. Search for maxima in the voting space.
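A very rough sketch of the testing phase above, hedged: this is a simplification for illustration, not Leibe et al.'s implementation. It assumes a `codebook` of patch descriptors and, for each entry, a list of stored displacements from matched patches to the object center:

```python
import numpy as np

def ism_vote(patch_descriptors, patch_positions, codebook, displacements, image_shape):
    """codebook: (K, D) array of cluster centers; displacements[k]: list of
    (dy, dx) offsets from patches matched to entry k to the object center."""
    H = np.zeros(image_shape, dtype=np.float64)
    for desc, (y, x) in zip(patch_descriptors, patch_positions):
        k = int(np.argmin(np.linalg.norm(codebook - desc, axis=1)))   # nearest codebook entry
        for dy, dx in displacements[k]:
            cy, cx = int(round(y + dy)), int(round(x + dx))           # candidate object center
            if 0 <= cy < image_shape[0] and 0 <= cx < image_shape[1]:
                H[cy, cx] += 1.0 / max(len(displacements[k]), 1)      # split each patch's vote mass
    return H   # maxima of H are hypothesized object centers
```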

Summary
– Grouping/segmentation is useful to make a compact representation and merge similar features: associate features based on a defined similarity measure and clustering objective.
– Fitting problems require finding supporting evidence for a model, even with clutter and missing features: associate features with an explicit model.
– Voting approaches, such as the Hough transform, make it possible to find likely model parameters without searching all combinations of features: Hough transform for lines, circles, …, arbitrary shapes defined by a set of boundary points, and recognition from patches.

Next Tuesday 9/29/08: background modeling and Bayesian classification (guest lecture by Professor Aggarwal). Thursday 10/2/08: no class. Next week my office hours are cancelled; Harshdeep is still available.