Fitting: Voting and the Hough Transform. Thursday, September 17th, 2015. Devi Parikh, Virginia Tech. 1 Slide credit: Kristen Grauman. Disclaimer: Many slides have been borrowed from Kristen Grauman, who may have borrowed some of them from others. Any time a slide did not already have a credit on it, I have credited it to Kristen, so there is a chance some of these credits are inaccurate.

Announcements. PS1 due last night. Project proposals: –Due September 29th –Look at class webpage for guidelines –Data collection. PS2 out any time now: –Due October 5th. Slides online. 2 Slide credit: Adapted by Devi Parikh from Kristen Grauman

Topics overview Features & filters Grouping & fitting –Segmentation and clustering –Hough transform –Deformable contours –Alignment and 2D image transformations Multiple views and motion Recognition Video processing 3 Slide credit: Kristen Grauman

Outline What are grouping problems in vision? Inspiration from human perception –Gestalt properties Bottom-up segmentation via clustering –Algorithms: mode finding and mean shift (k-means, mean-shift); graph-based (normalized cuts) –Features: color, texture, … Quantization for texture summaries 5 Slide credit: Kristen Grauman

Images as graphs. Fully-connected graph: –node (vertex) for every pixel –link between every pair of pixels p, q –affinity weight w_pq for each link (edge); w_pq measures similarity –similarity is inversely proportional to difference (in color and position…) [figure: graph with nodes p and q joined by an edge of weight w_pq] Source: Steve Seitz 6

Measuring affinity. One possibility: w_pq = exp(−||x_p − x_q||^2 / (2σ^2)). Small sigma: group only nearby points. Large sigma: group distant points. Kristen Grauman 7

Example: weighted graphs. Suppose we have a 4-pixel image (i.e., a 2 x 2 matrix), each pixel described by 2 features. Dimension of data points: d = 2. Number of data points: N = 4. [figure: the four points plotted in the 2-D feature space, feature dimension 1 vs. feature dimension 2] Kristen Grauman 8

Example: weighted graphs. Computing the distance matrix: for i=1:N, for j=1:N, D(i,j) = ||x_i − x_j||^2, end, end. [figure: the first row and column of D, D(1,:) and D(:,1), filled in; D(1,1) = 0] Kristen Grauman 9

Example: weighted graphs. Computing the distance matrix: for i=1:N, for j=1:N, D(i,j) = ||x_i − x_j||^2, end, end. [figure: more rows and columns of D filled in; the diagonal entries are 0] Kristen Grauman 10

Example: weighted graphs. Computing the distance matrix: for i=1:N, for j=1:N, D(i,j) = ||x_i − x_j||^2, end, end. D is an N x N matrix. [figure: the completed N x N distance matrix] Kristen Grauman 11

Example: weighted graphs. Distances → affinities. Distance matrix: for i=1:N, for j=1:N, D(i,j) = ||x_i − x_j||^2, end, end. Affinity matrix: for i=1:N, for j=i+1:N, A(i,j) = exp(−1/(2*σ^2) * ||x_i − x_j||^2); A(j,i) = A(i,j); end, end. [figure: the distance matrix D and the corresponding affinity matrix A] Kristen Grauman 12
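For concreteness, a runnable MATLAB/Octave sketch of these two loops. The four 2-D points in X and the value of sigma are made-up toy inputs, not the values from the slide's figures:

```matlab
% Toy input: four 2-D data points (one per column) and a scale parameter.
X = [0 1 4 5;
     0 1 4 5];
sigma = 1.0;
N = size(X, 2);

D = zeros(N);                      % squared-distance matrix
A = zeros(N);                      % affinity matrix
for i = 1:N
    for j = 1:N
        D(i,j) = norm(X(:,i) - X(:,j))^2;
    end
end
for i = 1:N
    for j = i+1:N                  % affinity is symmetric: fill upper triangle, mirror it
        A(i,j) = exp(-D(i,j) / (2*sigma^2));
        A(j,i) = A(i,j);
    end
end
A = A + eye(N);                    % each point has affinity exp(0) = 1 with itself
```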

Scale parameter σ affects affinity. [figure: a distance matrix D and the affinity matrices obtained from it with increasing σ] Kristen Grauman 13

Visualizing a shuffled affinity matrix If we permute the order of the vertices as they are referred to in the affinity matrix, we see different patterns: Kristen Grauman 14

Measuring affinity. [figure: data points and their affinity matrices for σ = 0.1, 0.2, and 1] 15 Slide credit: Kristen Grauman

Measuring affinity. [figure: 40 data points and their 40 x 40 affinity matrix A; points x_1…x_10 and points x_31…x_40 form visible blocks] 1. What do the blocks signify? 2. What does the symmetry of the matrix signify? 3. How would the matrix change with a larger value of σ? 16 Slide credit: Kristen Grauman

Putting it together. [figure: data points and affinity matrices for σ = 0.1, 0.2, and 1, with blocks corresponding to points x_1…x_10 and points x_31…x_40] Kristen Grauman 17

Segmentation by Graph Cuts. Break the graph into segments: –Want to delete links that cross between segments –Easiest to break links that have low similarity (low weight): similar pixels should be in the same segments, dissimilar pixels should be in different segments. [figure: graph partitioned into segments A, B, C by cutting low-weight edges w_pq] Source: Steve Seitz 18

Cuts in a graph: Min cut. Link cut: –set of links whose removal makes a graph disconnected –cost of a cut: the sum of the weights of the cut edges, cut(A,B) = Σ_{p in A, q in B} w_pq. Finding the minimum cut gives you a segmentation; fast algorithms exist for doing this. [figure: graph split into groups A and B by the cut] Source: Steve Seitz 19

Minimum cut Problem with minimum cut: Weight of cut proportional to number of edges in the cut; tends to produce small, isolated components. [Shi & Malik, 2000 PAMI] 20 Slide credit: Kristen Grauman

Cuts in a graph: Normalized cut. Normalized Cut fixes the bias of Min Cut by normalizing for the size of the segments (see the criterion written out below), where assoc(A,V) = sum of weights of all edges that touch A. The Ncut value is small when we get two clusters with many high-weight edges within them and few low-weight edges between them. Approximate solution for minimizing the Ncut value: a generalized eigenvalue problem. Source: Steve Seitz. J. Shi and J. Malik, Normalized Cuts and Image Segmentation, CVPR 1997. 21
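For reference, a standard statement of Shi and Malik's normalized-cut criterion, with cut(A,B) = total weight of the edges between A and B, and V = the full vertex set:

Ncut(A,B) = cut(A,B) / assoc(A,V) + cut(A,B) / assoc(B,V)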

Example results 22 Slide credit: Kristen Grauman

Results: Berkeley Segmentation Engine 23 Slide credit: Kristen Grauman

Normalized cuts: pros and cons Pros: Generic framework, flexible to choice of function that computes weights (“affinities”) between nodes Does not require model of the data distribution Cons: Time complexity can be high –Dense, highly connected graphs → many affinity computations –Solving eigenvalue problem Preference for balanced partitions Kristen Grauman 24

Summary Segmentation to find object boundaries or mid-level regions, tokens. Bottom-up segmentation via clustering –General choices -- features, affinity functions, and clustering algorithms Grouping also useful for quantization, can create new feature summaries –Texton histograms for texture within local region Example clustering methods –K-means –Mean shift –Graph cut, normalized cuts 25

Now: Fitting Want to associate a model with observed features [Fig from Marszalek & Schmid, 2007] For example, the model could be a line, a circle, or an arbitrary shape. Kristen Grauman 26

Fitting: Main idea Choose a parametric model to represent a set of features Membership criterion is not local Can’t tell whether a point belongs to a given model just by looking at that point Three main questions: What model represents this set of features best? Which of several model instances gets which feature? How many model instances are there? Computational complexity is important It is infeasible to examine every possible set of parameters and every possible combination of features Slide credit: L. Lazebnik 27

Example: Line fitting Why fit lines? Many objects characterized by presence of straight lines Wait, why aren’t we done just by running edge detection? Kristen Grauman 28

Difficulty of line fitting. Extra edge points (clutter), multiple models: –which points go with which line, if any? Only some parts of each line detected, and some parts are missing: –how to find a line that bridges missing evidence? Noise in measured edge points, orientations: –how to detect true underlying parameters? Kristen Grauman 29

Voting It’s not feasible to check all combinations of features by fitting a model to each possible subset. Voting is a general technique where we let the features vote for all models that are compatible with them. –Cycle through features, cast votes for model parameters. –Look for model parameters that receive a lot of votes. Noise & clutter features will cast votes too, but typically their votes should be inconsistent with the majority of “good” features. Kristen Grauman 30

Fitting lines: Hough transform Given points that belong to a line, what is the line? How many lines are there? Which points belong to which lines? Hough Transform is a voting technique that can be used to answer all of these questions. Main idea: 1. Record vote for each possible line on which each edge point lies. 2. Look for lines that get many votes. Kristen Grauman 31

Finding lines in an image: Hough space. Equation of a line? y = mx + b. Connection between image (x,y) and Hough (m,b) spaces: a line y = m_0 x + b_0 in the image corresponds to the point (m_0, b_0) in Hough space. To go from image space to Hough space: –given a set of points (x,y), find all (m,b) such that y = mx + b. [figure: image space (x,y) and Hough (parameter) space (m,b)] Slide credit: Steve Seitz 32

Finding lines in an image: Hough space. Connection between image (x,y) and Hough (m,b) spaces: a line in the image corresponds to a point in Hough space. What does a point (x_0, y_0) in the image space map to? –Answer: the solutions of b = -x_0 m + y_0 –this is a line in Hough space. [figure: image space with the point (x_0, y_0) and Hough (parameter) space with the corresponding line] Slide credit: Steve Seitz 33

Finding lines in an image: Hough space. What are the line parameters for the line that contains both (x_0, y_0) and (x_1, y_1)? It is the intersection of the lines b = -x_0 m + y_0 and b = -x_1 m + y_1. [figure: the two image points and the corresponding pair of lines intersecting in Hough (parameter) space] 34 Slide credit: Kristen Grauman
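A quick worked example with made-up points: take (x_0, y_0) = (1, 1) and (x_1, y_1) = (2, 3). Their Hough-space lines are b = -m + 1 and b = -2m + 3; setting these equal gives -m + 1 = -2m + 3, so m = 2 and b = -1. The corresponding image-space line y = 2x - 1 indeed passes through both points.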

Finding lines in an image: Hough algorithm. How can we use this to find the most likely parameters (m,b) for the most prominent line in the image space? Let each edge point in image space vote for a set of possible parameters in Hough space. Accumulate votes in a discrete set of bins; parameters with the most votes indicate the line in image space. [figure: edge points in image space and their votes accumulating in Hough (parameter) space] 35 Slide credit: Kristen Grauman

Polar representation for lines. Issues with the usual (m,b) parameter space: m can take on infinite values and is undefined for vertical lines. Instead, parameterize each line by d, the perpendicular distance from the line to the origin, and θ, the angle the perpendicular makes with the x-axis, so that x cos θ + y sin θ = d. A point in image space → a sinusoid segment in Hough (d, θ) space. [figure: a line in the image (image rows vs. image columns), its perpendicular from the origin [0,0], and the parameters d and θ] Kristen Grauman 36

Hough line demo: …/hough.html 37 Slide credit: Kristen Grauman

Hough transform algorithm. Using the polar parameterization: d = x cos θ + y sin θ. Basic Hough transform algorithm:
1. Initialize H[d, θ] = 0.
2. For each edge point I[x,y] in the image: for θ = [θ_min to θ_max] // some quantization: compute d = x cos θ + y sin θ and increment H[d, θ] += 1.
3. Find the value(s) of (d, θ) where H[d, θ] is maximum.
4. The detected line in the image is given by d = x cos θ + y sin θ.
H: accumulator array (votes) over (d, θ). Time complexity (in terms of number of votes per pt)? Source: Steve Seitz 38
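For concreteness, a minimal runnable MATLAB/Octave version of this accumulator loop. It assumes a binary edge map E (e.g., the output of an edge detector); the 1-degree angle quantization and the variable names are illustrative choices, not from the slides:

```matlab
% E: binary edge map
[rows, cols] = size(E);
thetas = -90:1:89;                       % quantized angles in degrees
dmax   = ceil(sqrt(rows^2 + cols^2));    % largest possible |d|
H      = zeros(2*dmax+1, numel(thetas)); % accumulator: rows index d, columns index theta

[ys, xs] = find(E);                      % coordinates of edge points
for k = 1:numel(xs)
    x = xs(k); y = ys(k);
    for t = 1:numel(thetas)
        d = round(x*cosd(thetas(t)) + y*sind(thetas(t)));
        H(d + dmax + 1, t) = H(d + dmax + 1, t) + 1;   % cast one vote
    end
end

% The strongest line corresponds to the largest accumulator cell
[~, idx]   = max(H(:));
[di, ti]   = ind2sub(size(H), idx);
d_star     = di - dmax - 1;
theta_star = thetas(ti);                 % detected line: d* = x*cosd(theta*) + y*sind(theta*)
```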

[figure: original image, Canny edges, vote space and top peaks] Kristen Grauman 39

Showing longest segments found Kristen Grauman 40

Impact of noise on Hough. [figure: image-space edge coordinates (x, y) and the resulting votes in (θ, d) space] What difficulty does this present for an implementation? 41 Slide credit: Kristen Grauman

Impact of noise on Hough. [figure: image-space edge coordinates and the resulting votes] Here, everything appears to be “noise”, or random edge points, but we still see peaks in the vote space. 42 Slide credit: Kristen Grauman

Extensions. Extension 1: use the image gradient. 1. same 2. for each edge point I[x,y] in the image: θ = gradient at (x,y), H[d, θ] += 1 3. same 4. same (Reduces degrees of freedom.) Extension 2: give more votes for stronger edges. Extension 3: change the sampling of (d, θ) to give more/less resolution. Extension 4: the same procedure can be used with circles, squares, or any other shape. 43 Slide credit: Kristen Grauman

Extensions. Extension 1: use the image gradient. 1. same 2. for each edge point I[x,y] in the image: compute unique (d, θ) based on the image gradient at (x,y), H[d, θ] += 1 3. same 4. same (Reduces degrees of freedom.) Extension 2: give more votes for stronger edges (use magnitude of gradient). Extension 3: change the sampling of (d, θ) to give more/less resolution. Extension 4: the same procedure can be used with circles, squares, or any other shape… Source: Steve Seitz 44

Hough transform for circles. For a fixed radius r. Circle with center (a,b) and radius r: (x − a)^2 + (y − b)^2 = r^2. Equation of the set of circles that all pass through a point? [figure: image space and Hough (a,b) space] Adapted by Devi Parikh from: Kristen Grauman 45

Hough transform for circles. For a fixed radius r. Circle: center (a,b) and radius r. [figure: each image point votes for a circle of possible centers in Hough (a,b) space; intersection: most votes for the center occur here] Kristen Grauman 46

Hough transform for circles. For an unknown radius r. Circle: center (a,b) and radius r. [figure: an image point and the 3-D Hough space (a, b, r); what does a single point vote for?] Kristen Grauman 47

Hough transform for circles. For an unknown radius r. Circle: center (a,b) and radius r. [figure: the votes from an image point sweep out a cone-shaped surface in the 3-D Hough space (a, b, r)] Kristen Grauman 48

Hough transform for circles. For an unknown radius r, known gradient direction. Circle: center (a,b) and radius r. [figure: image point x with gradient direction θ; the candidate centers lie along that direction in Hough space] Kristen Grauman 49

Hough transform for circles.
For every edge pixel (x,y):
  For each possible radius value r:
    For each possible gradient direction θ:   // or use estimated gradient at (x,y)
      a = x – r cos(θ)   // column
      b = y + r sin(θ)   // row
      H[a,b,r] += 1
    end
  end
end
Check out the online demo. Time complexity per edgel? Kristen Grauman 50
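A runnable MATLAB/Octave version of this triple loop, as a sketch. The edge map E is assumed given, and the radius range, the 5-degree angular step, and the variable names are illustrative choices, not from the slides:

```matlab
% E: binary edge map
[rows, cols] = size(E);
radii  = 5:30;                           % candidate radii (pixels)
angles = 0:5:355;                        % quantized gradient directions (degrees)
H = zeros(cols, rows, numel(radii));     % accumulator over (a, b, r); a = column, b = row

[ys, xs] = find(E);
for k = 1:numel(xs)
    x = xs(k); y = ys(k);
    for ri = 1:numel(radii)
        r = radii(ri);
        for theta = angles               % or use the estimated gradient at (x,y)
            a = round(x - r*cosd(theta));          % candidate center column
            b = round(y + r*sind(theta));          % candidate center row
            if a >= 1 && a <= cols && b >= 1 && b <= rows
                H(a, b, ri) = H(a, b, ri) + 1;     % cast one vote
            end
        end
    end
end

[~, idx] = max(H(:));                    % strongest circle hypothesis
[a_star, b_star, ri_star] = ind2sub(size(H), idx);
r_star = radii(ri_star);
```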

Example: detecting circles with Hough. [figure: original image, edges, votes for the penny radius] Note: a different Hough transform (with separate accumulators) was used for each circle radius (quarters vs. penny). 51 Slide credit: Kristen Grauman

Example: detecting circles with Hough. [figure: original image, edges, votes for the quarter radius, combined detections] Coin-finding sample images from: Vivek Kwatra. 52 Slide credit: Kristen Grauman

Example: iris detection. Hemerson Pistori and Eduardo Rocha Costa. [figure: gradient + threshold, Hough space (fixed radius), max detections] Kristen Grauman 53

Example: iris detection An Iris Detection Method Using the Hough Transform and Its Evaluation for Facial and Eye Movement, by Hideki Kashima, Hitoshi Hongo, Kunihito Kato, Kazuhiko Yamamoto, ACCV Kristen Grauman 54

Voting: practical tips. Minimize irrelevant tokens first. Choose a good grid / discretization ([figure: too coarse, too fine, or in between?]). Vote for neighbors, also (smoothing in the accumulator array); see the sketch below. Use direction of edge to reduce parameters by 1. To read back which points voted for “winning” peaks, keep tags on the votes. Kristen Grauman 55
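The “vote for neighbors” tip can be implemented as a small blur of the accumulator before peak-finding. A two-line MATLAB/Octave sketch, assuming H is a 2-D accumulator like the (d, θ) array in the line-transform sketch above; the 3 x 3 box filter is an arbitrary illustrative choice:

```matlab
Hs = conv2(H, ones(3)/9, 'same');   % spread each vote over a 3x3 neighborhood
[~, idx] = max(Hs(:));              % then pick peaks in the smoothed accumulator
```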

Hough transform: pros and cons Pros All points are processed independently, so can cope with occlusion, gaps Some robustness to noise: noise points unlikely to contribute consistently to any single bin Can detect multiple instances of a model in a single pass Cons Complexity of search time increases exponentially with the number of model parameters Non-target shapes can produce spurious peaks in parameter space Quantization: can be tricky to pick a good grid size Kristen Grauman 56

Generalized Hough Transform. What if we want to detect arbitrary shapes? Intuition: a reference point and displacement vectors. [figure: model image with a reference point and displacement vectors to boundary points; vote space; novel image with votes converging on the reference point] Now suppose those colors encode gradient directions… Kristen Grauman 57

Generalized Hough Transform. Offline procedure: Define a model shape by its boundary points and a reference point a. [Dana H. Ballard, Generalizing the Hough Transform to Detect Arbitrary Shapes, 1980] At each boundary point p_i, compute the displacement vector r = a – p_i. Store these vectors in a table indexed by gradient orientation θ. [figure: model shape with boundary points p_1, p_2, their gradient orientations θ, the reference point a, and the table of displacement vectors indexed by θ] Kristen Grauman 58

Generalized Hough Transform. Detection procedure: For each edge point, use its gradient orientation θ to index into the stored table, then use the retrieved r vectors to vote for the reference point. Assuming translation is the only transformation here, i.e., orientation and scale are fixed. [figure: novel image edge points with orientations θ, the indexed table of displacement vectors, and votes accumulating at the reference point] Kristen Grauman 59
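Putting the offline and detection procedures together, here is a compact MATLAB/Octave sketch under the slide's assumption that translation is the only transformation. All numeric values (the model boundary points P, their gradient orientations, the 10-degree orientation bins, the toy test edges E, the 100 x 100 accumulator) are made up for illustration:

```matlab
% ---- Offline: build the R-table from a model shape ----
P      = [10 20 30 20; 10 10 10 20];    % model boundary points, one per column (toy values)
thetaP = [0 90 180 270];                % gradient orientation (degrees) at each boundary point
a      = mean(P, 2);                    % reference point (here: centroid of the boundary)

nbins  = 36;                            % 10-degree orientation bins
Rtable = cell(nbins, 1);                % Rtable{k}: displacement vectors for orientation bin k
for i = 1:size(P, 2)
    k = floor(mod(thetaP(i), 360) / (360/nbins)) + 1;
    Rtable{k} = [Rtable{k}, a - P(:, i)];        % store r = a - p_i
end

% ---- Detection: vote for the reference point in a novel image ----
E      = [52 62 72 62; 40 40 40 50];    % toy edge points (the model translated), one per column
thetaE = [0 90 180 270];                % their gradient orientations (degrees)
H = zeros(100, 100);                    % accumulator H(x, y) over candidate reference points
for j = 1:size(E, 2)
    k = floor(mod(thetaE(j), 360) / (360/nbins)) + 1;
    R = Rtable{k};                      % retrieved displacement vectors for this orientation
    for m = 1:size(R, 2)
        c = round(E(:, j) + R(:, m));   % candidate reference point
        if all(c >= 1) && c(1) <= 100 && c(2) <= 100
            H(c(1), c(2)) = H(c(1), c(2)) + 1;   % cast one vote
        end
    end
end
[~, idx] = max(H(:));
[x_ref, y_ref] = ind2sub(size(H), idx); % detected reference-point location
```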

Generalized Hough for object detection. Instead of indexing displacements by gradient orientation, index by matched local patterns. [figure: training image and a “visual codeword” with its displacement vectors] B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision, 2004. Source: L. Lazebnik 60

Generalized Hough for object detection. Instead of indexing displacements by gradient orientation, index by “visual codeword”. [figure: test image with codeword matches voting for the object center] B. Leibe, A. Leonardis, and B. Schiele, Combined Object Categorization and Segmentation with an Implicit Shape Model, ECCV Workshop on Statistical Learning in Computer Vision, 2004. Source: L. Lazebnik 61

Summary Grouping/segmentation useful to make a compact representation and merge similar features –associate features based on defined similarity measure and clustering objective Fitting problems require finding any supporting evidence for a model, even within clutter and missing features. –associate features with an explicit model Voting approaches, such as the Hough transform, make it possible to find likely model parameters without searching all combinations of features. –Hough transform approach for lines, circles, …, arbitrary shapes defined by a set of boundary points, recognition from patches. Kristen Grauman 62

Questions? See you Tuesday! 63