1 Instructor: Mircea Nicolescu Lecture 7
CS 485 / 685 Computer Vision Instructor: Mircea Nicolescu Lecture 7

2 Second Derivative in 2D: Laplacian
The Laplacian: ∇²f = ∂²f/∂x² + ∂²f/∂y²

3 Second Derivative in 2D: Laplacian
The Laplacian can be implemented using the mask:
 0  1  0
 1 -4  1
 0  1  0
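The mask can be tried out directly. A minimal numpy sketch (the tiny test images and the helper function are invented for illustration): the 4-neighbor Laplacian responds with zero on a flat region and strongly at an intensity spike.

```python
import numpy as np

# 3x3 Laplacian mask (4-neighbor version)
laplacian_mask = np.array([[0,  1, 0],
                           [1, -4, 1],
                           [0,  1, 0]])

def apply_mask_at(image, mask, r, c):
    """Correlate the mask with the 3x3 neighborhood centered at (r, c)."""
    patch = image[r-1:r+2, c-1:c+2]
    return np.sum(patch * mask)

# On a constant region the Laplacian responds with 0;
# at an intensity spike it responds strongly.
flat = np.full((5, 5), 10)
spike = flat.copy()
spike[2, 2] = 20

print(apply_mask_at(flat, laplacian_mask, 2, 2))   # 0
print(apply_mask_at(spike, laplacian_mask, 2, 2))  # -40
```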

4 Variations of Laplacian

5 Laplacian - Example

6 Properties of Laplacian
It is an isotropic operator. It is cheaper to implement than the gradient (one mask only). It does not provide information about edge direction. It is more sensitive to noise (differentiates twice).

7 Properties of Laplacian
How do we estimate the edge strength? Four cases of zero-crossings: {+,-} {+,0,-} {-,+} {-,0,+} The slope of a zero-crossing {a, -b} is |a+b|. To mark an edge: compute the slope at the zero-crossing, then apply a threshold to the slope.
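The four cases and the slope test can be sketched for a 1D second-derivative signal (the helper function, its input list, and the threshold value are invented for the example):

```python
def zero_crossing_edges(second_deriv, threshold):
    """Mark edges at zero-crossings of a 1D second-derivative signal.

    Covers the four cases from the slide: {+,-}, {-,+}, {+,0,-}, {-,0,+}.
    For a crossing between values a and -b the slope is |a - (-b)| = |a + b|.
    """
    edges = []
    n = len(second_deriv)
    i = 0
    while i < n - 1:
        a, b = second_deriv[i], second_deriv[i + 1]
        if a * b < 0:                                   # {+,-} or {-,+}
            if abs(a - b) >= threshold:
                edges.append(i)
            i += 1
        elif b == 0 and i + 2 < n and a * second_deriv[i + 2] < 0:
            # {+,0,-} or {-,0,+}: the crossing sits on the zero sample
            if abs(a - second_deriv[i + 2]) >= threshold:
                edges.append(i + 1)
            i += 2
        else:
            i += 1
    return edges

# The strong crossing between 3 and -2 (slope 5) passes the threshold;
# the weak {-,0,+} crossing (slope 3) does not.
print(zero_crossing_edges([1, 3, -2, -1, 0, 2], 4))   # [1]
```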

8 Laplacian of Gaussian (LoG)
The Marr-Hildreth edge detector Uses the Laplacian-of-Gaussian (LoG) To reduce the noise effect, the image is first smoothed with a low-pass filter. In the case of the LoG, the low-pass filter is chosen to be a Gaussian. (σ determines the degree of smoothing, mask size increases with σ)

9 Laplacian of Gaussian (LoG)
It can be shown that: ∇²G(x,y) = ((x² + y² − 2σ²) / σ⁴) G(x,y), where G(x,y) = (1/(2πσ²)) exp(−(x²+y²)/(2σ²)). Its negation −∇²G is the "Mexican hat" (inverted LoG).
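The closed form can be checked numerically. A small numpy sketch (kernel radius and σ are arbitrary choices for the example) samples the LoG and verifies two defining properties: near-zero total response (no output on flat regions) and a negative center lobe (the inverted LoG flips the sign):

```python
import numpy as np

def log_kernel(sigma, radius):
    """Sample the Laplacian-of-Gaussian on a (2r+1)x(2r+1) grid.
    Uses the closed form  LoG(x,y) = ((x^2+y^2-2*sigma^2)/sigma^4) * G(x,y)."""
    ax = np.arange(-radius, radius + 1)
    x, y = np.meshgrid(ax, ax)
    g = np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
    return (x**2 + y**2 - 2 * sigma**2) / sigma**4 * g

k = log_kernel(sigma=1.0, radius=4)
print(abs(k.sum()) < 0.01)   # True: near-zero response on flat regions
print(k[4, 4] < 0)           # True: center is negative; the inverted LoG flips it
```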

10 Laplacian of Gaussian (LoG)
Masks:

11 Laplacian of Gaussian (LoG)
Example

12 Separability Gaussian:
A 2-D Gaussian can be separated into two 1-D Gaussians, so we can perform 2 convolutions with 1-D Gaussians. Direct 2-D convolution: k^2 multiplications per pixel. Two 1-D convolutions: 2k multiplications per pixel.
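The separability claim is easy to verify numerically. A numpy sketch (the naive reference convolution is written only for checking; image size and σ are arbitrary): smoothing with the full k × k Gaussian gives the same result as a 1-D pass along each axis.

```python
import numpy as np

def gauss1d(sigma, radius):
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

def conv2d(image, kernel):
    """Naive 'valid' 2D convolution, just for checking the separability claim."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for r in range(h):
        for c in range(w):
            out[r, c] = np.sum(image[r:r+kh, c:c+kw] * kernel[::-1, ::-1])
    return out

rng = np.random.default_rng(0)
img = rng.random((12, 12))
g = gauss1d(1.0, 2)

# Full 2D Gaussian is the outer product of two 1D Gaussians ...
full = conv2d(img, np.outer(g, g))
# ... so one k x k convolution equals a row pass followed by a column pass.
rows = conv2d(img, g.reshape(1, -1))   # 1 x k: smooth along x
both = conv2d(rows, g.reshape(-1, 1))  # k x 1: smooth along y
print(np.allclose(full, both))         # True
```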

13 Separability Laplacian-of-Gaussian:
A direct 2-D LoG convolution requires k^2 multiplications per pixel; the separable implementation requires 4k multiplications per pixel.

14 Separability Gaussian filtering: Image → g(x) → g(y)
Laplacian-of-Gaussian filtering: Image → [gxx(x) → g(y)] + [g(x) → gyy(y)]

15 Separability of LoG Steps:

16 Laplacian of Gaussian (LoG)
Marr-Hildreth (LoG) Algorithm: (1) Compute the LoG, using either one 2D filter or four 1D filters (2) Find zero-crossings from each row and column (3) Find the slope of the zero-crossings (4) Apply a threshold to the slope and mark edges
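The steps above can be sketched end to end. A minimal, assumption-laden Python version (numpy only; the σ and threshold defaults, the zero-padded borders, and the synthetic step image are all choices made for the example, not taken from the slides):

```python
import numpy as np

def marr_hildreth(image, sigma=1.0, thresh=0.1):
    """Minimal Marr-Hildreth sketch: smooth, apply the Laplacian,
    then mark strong zero-crossings. A teaching sketch, not production code."""
    r = int(3 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2)); g /= g.sum()
    # Separable Gaussian smoothing ('same' mode zero-pads the borders).
    sm = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 0, image)
    sm = np.apply_along_axis(lambda m: np.convolve(m, g, mode='same'), 1, sm)
    # Laplacian via the 4-neighbor mask (np.roll wraps around at borders).
    lap = (np.roll(sm, 1, 0) + np.roll(sm, -1, 0) +
           np.roll(sm, 1, 1) + np.roll(sm, -1, 1) - 4 * sm)
    # Zero-crossing with slope above threshold, checked against right/down neighbor.
    right = np.roll(lap, -1, 1)
    down = np.roll(lap, -1, 0)
    zc = ((lap * right < 0) & (np.abs(lap - right) > thresh)) | \
         ((lap * down < 0) & (np.abs(lap - down) > thresh))
    return zc

# Vertical step edge: dark left half, bright right half.
img = np.zeros((16, 16)); img[:, 8:] = 1.0
edges = marr_hildreth(img)
# Edges are found near column 8, and none in the flat left interior.
print(edges[:, 6:10].any(), edges[:, 1:4].any())   # True False
```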

17 Gradient vs LoG – a comparison
Gradient works well when the image contains sharp intensity transitions. Zero-crossings of LoG offer better localization, especially when the edges are not very sharp (compare a step edge with a ramp edge).

18 Gradient vs LoG Disadvantage of LoG edge detection:
Does not handle corners well

19 Gradient vs LoG Disadvantage of LoG edge detection:
Does not handle corners well. Why? The derivative of the Gaussian is oriented, while the Laplacian of the Gaussian is unoriented.

20 Difference of Gaussians (DoG)
The Difference-of-Gaussians (DoG) Approximates the LoG filter with a filter that is the difference of two differently sized Gaussians – a DoG filter (“Difference of Gaussians”). The image is first smoothed by convolution with a Gaussian kernel of scale σ1. A second image is obtained by smoothing with a Gaussian kernel of scale σ2.

21 Difference of Gaussians (DoG)
Their difference is: DoG(x, y) = G_σ1(x, y) − G_σ2(x, y), which defines the DoG as an operator or convolution kernel. It is an approximation of the actual LoG.
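A quick numpy check of the DoG shape (the 1.6 ratio between the two scales is a common rule of thumb for approximating the LoG, not something fixed by the slides): the kernel has a positive center, a negative surround, and near-zero total sum, just like the inverted LoG.

```python
import numpy as np

def gauss2d(sigma, radius):
    ax = np.arange(-radius, radius + 1)
    x, y = np.meshgrid(ax, ax)
    return np.exp(-(x**2 + y**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)

# DoG kernel: narrow Gaussian minus wide Gaussian.
r = 6
dog = gauss2d(1.0, r) - gauss2d(1.6, r)
print(dog[r, r] > 0)          # True: positive center ...
print(dog[r, r + 3] < 0)      # True: ... negative surround (Mexican hat)
print(abs(dog.sum()) < 0.01)  # True: near-zero response on flat regions
```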

22 Difference of Gaussians (DoG)
σ = 1 σ = 2 difference

23 Edge Detection Using Directional Derivative
The second directional derivative This is the second derivative computed in the direction of the gradient.

24 Directional Derivative
The partial derivatives of f(x,y) will give the slope ∂f/∂x in the positive x direction and the slope ∂f /∂y in the positive y direction. We can generalize the partial derivatives to calculate the slope in any direction (i.e., directional derivative).

25 Directional Derivative
Directional derivative computes intensity changes in a specified direction. Compute derivative in direction u

26 Directional Derivative
Directional derivative is a linear combination of partial derivatives (from vector calculus).

27 Directional Derivative
With ||u|| = 1 and u = (cosθ, sinθ), the directional derivative is f_u = cosθ ∂f/∂x + sinθ ∂f/∂y.
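The linear-combination formula can be checked against a direct numerical slope. A small sketch with an invented toy surface f(x, y) = x² + 3y:

```python
import numpy as np

def f(x, y):                      # toy surface, chosen for the example
    return x**2 + 3*y

# Analytic gradient at (1, 2): (df/dx, df/dy) = (2x, 3) = (2, 3)
grad = np.array([2.0, 3.0])
u = np.array([np.cos(np.pi/4), np.sin(np.pi/4)])   # unit direction, theta = 45 deg

analytic = u @ grad               # cos(t)*fx + sin(t)*fy
h = 1e-5
p = np.array([1.0, 2.0])
numeric = (f(*(p + h*u)) - f(*(p - h*u))) / (2*h)  # central difference along u
print(abs(analytic - numeric) < 1e-6)   # True: the linear combination matches
```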

28 Higher Order Directional Derivatives

29 Edge Detection Using Directional Derivative
What direction would you use for edge detection? Direction of the gradient: n = ∇f / ||∇f||

30 Edge Detection Using Directional Derivative
Second directional derivative along the gradient direction: f_nn = (fx² fxx + 2 fx fy fxy + fy² fyy) / (fx² + fy²)
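The second directional derivative along the gradient can likewise be checked numerically. A sketch with an invented polynomial f(x, y) = x²y + y², whose partials at (1, 1) are computed by hand:

```python
import numpy as np

def f(x, y):
    return x**2 * y + y**2

# Partials at (1, 1): fx=2, fy=3, fxx=2, fxy=2, fyy=2  (by hand)
fx, fy, fxx, fxy, fyy = 2.0, 3.0, 2.0, 2.0, 2.0

# Second directional derivative along the gradient direction n = grad/|grad|:
f_nn = (fx**2 * fxx + 2*fx*fy*fxy + fy**2 * fyy) / (fx**2 + fy**2)

# Check against a second central difference along the gradient direction.
n = np.array([fx, fy]) / np.hypot(fx, fy)
p, h = np.array([1.0, 1.0]), 1e-3
numeric = (f(*(p + h*n)) - 2*f(*p) + f(*(p - h*n))) / h**2
print(abs(f_nn - numeric) < 1e-6)   # True
```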

31 Properties of Second Directional Derivative

32 Facet Model Assumes that an image is an array of samples of a continuous function f(x,y). Reconstructs f(x,y) from sampled pixel values. Uses directional derivatives which are computed analytically (without using discrete approximations). z=f(x,y)

33 Facet Model For complex images, f(x,y) could contain extremely high powers of x and y. Idea: model f(x,y) as a piece-wise function. Approximate each pixel value by fitting a bi-cubic polynomial in a small neighborhood around the pixel (facet).

34 Facet Model Steps: (1) Fit a bi-cubic polynomial to a small neighborhood of each pixel. (2) Compute (analytically) directional derivatives in the direction of the gradient. (3) Find points where the second derivative is equal to zero (this step provides smoothing too).

35 Anisotropic Filtering
Symmetric Gaussian smoothing tends to blur out edges rather aggressively. An “oriented” smoothing operator (edge-preserving smoothing) works better: (i) smooth aggressively perpendicular to the gradient; (ii) smooth little along the gradient. Mathematically, this is formulated using the diffusion equation.
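The slides formulate edge-preserving smoothing via the diffusion equation but do not fix a scheme; one classic realization is Perona-Malik diffusion. A teaching sketch (the parameters kappa and lam, the conduction function, and the noisy step image are assumptions made for the example):

```python
import numpy as np

def perona_malik(img, n_iter=20, kappa=0.2, lam=0.2):
    """Perona-Malik-style diffusion: diffuse strongly where gradients are
    small, weakly across strong edges. Parameters are assumption-level defaults."""
    u = img.astype(float).copy()
    for _ in range(n_iter):
        # Neighbor differences (borders replicated via edge padding).
        p = np.pad(u, 1, mode='edge')
        dn = p[:-2, 1:-1] - u
        ds = p[2:, 1:-1] - u
        de = p[1:-1, 2:] - u
        dw = p[1:-1, :-2] - u
        # Conduction coefficient g = exp(-(d/kappa)^2): ~1 in flat regions
        # (smooth a lot), ~0 across strong edges (smooth little).
        g = lambda d: np.exp(-(d / kappa)**2)
        u = u + lam * (g(dn)*dn + g(ds)*ds + g(de)*de + g(dw)*dw)
    return u

# Step edge plus noise: diffusion flattens the noise but keeps the step.
rng = np.random.default_rng(1)
img = np.zeros((16, 16)); img[:, 8:] = 1.0
noisy = img + 0.05 * rng.standard_normal(img.shape)
out = perona_malik(noisy)
print(np.abs(out - img).mean() < np.abs(noisy - img).mean())  # True
```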

36 Anisotropic Filtering – Example
result using anisotropic filtering

37 Effect of Scale
Small σ detects fine features. Large σ detects large-scale edges.

38 Multi-Scale Processing
A formal theory for handling image structures at multiple scales. Determine which structures (e.g., edges) are most significant by considering the range of scales over which they occur.

39 Multi-Scale Processing
(figure: image smoothed at σ=1, 2, 4, 8, 16) Interesting scales: scales at which important structures are present; e.g., in the image above, people can be detected at scales 1-4

40 Scale Space Detect and plot the zero-crossings of a 1D function over a continuum of scales σ (the Gaussian-filtered signal, plotted in the (x, σ) plane). Instead of treating zero-crossings at a single scale as single points, we can now treat them at multiple scales as contours.

41 Scale Space Properties of scale space (assuming Gaussian smoothing):
Zero-crossings may shift with increasing scale. Two zero-crossings may merge with increasing scale. A contour may not split in two with increasing scale.
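The coarse-to-fine behavior can be observed on a 1D signal: counting the zero-crossings of the smoothed second derivative at increasing σ shows fine structure disappearing at coarse scales (the signal and scale values are invented for the example):

```python
import numpy as np

def count_zero_crossings(signal, sigma):
    """Smooth a 1D signal with a Gaussian, take the second difference,
    and count its sign changes (the scale-space zero-crossings)."""
    r = int(4 * sigma)
    x = np.arange(-r, r + 1)
    g = np.exp(-x**2 / (2 * sigma**2)); g /= g.sum()
    s = np.convolve(signal, g, mode='valid')
    d2 = np.diff(s, 2)
    return int(np.sum(d2[:-1] * d2[1:] < 0))

rng = np.random.default_rng(2)
sig = rng.standard_normal(400)
counts = [count_zero_crossings(sig, s) for s in (1, 2, 4, 8)]
print(counts[0] > counts[-1])   # True: coarse scales keep only large structures
```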

42 Multi-Scale Processing

43 Multi-Scale Processing

44 Edge Detection is Just the Beginning…
Berkeley segmentation database: image, human segmentation, gradient magnitude

45 Math Review Vectors Matrices SVD Linear systems
Geometric transformations

46 n-Dimensional Vector An n-dimensional vector v is denoted as the column vector with components x1, x2, . . . , xn.
Its transpose vT is denoted as the row vector (x1, x2, . . . , xn).

47 Vector Normalization Vector normalization → unit-length vector: v̂ = v / ||v||
Example:
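In numpy, the normalization step is one line (the example vector (3, 4) is chosen for convenience, since its magnitude is exactly 5):

```python
import numpy as np

v = np.array([3.0, 4.0])
unit = v / np.linalg.norm(v)      # divide by the magnitude ||v|| = 5
print(unit)                       # [0.6 0.8]
print(round(float(np.linalg.norm(unit)), 6))   # 1.0: unit length
```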

48 Inner (or Dot) Product Given vT = (x1, x2, . . . , xn) and
wT = (y1, y2, . . . , yn), their dot product is defined as the scalar v · w = x1y1 + x2y2 + . . . + xnyn, or equivalently v · w = vTw.

49 Defining Magnitude Using Dot Product
Magnitude definition: ||v|| = sqrt(x1² + x2² + . . . + xn²). Dot product definition: v · v = x1² + x2² + . . . + xn². Therefore: ||v|| = sqrt(v · v).

50 Geometric Definition of Dot Product
θ corresponds to the smaller angle between u and v

51 Geometric Definition of Dot Product
The sign of u · v depends on cos(θ): positive when θ < 90°, zero when θ = 90° (orthogonal vectors), negative when θ > 90°.
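The geometric definition gives an easy way to recover the angle between two vectors (example vectors invented):

```python
import numpy as np

u = np.array([1.0, 0.0])
v = np.array([1.0, 1.0])
# u . v = ||u|| ||v|| cos(theta), so cos(theta) = u.v / (||u|| ||v||)
cos_theta = u @ v / (np.linalg.norm(u) * np.linalg.norm(v))
theta = np.degrees(np.arccos(cos_theta))
print(round(float(theta), 1))   # 45.0
```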

52 Vector (Cross) Product
The cross product w = u × v is a VECTOR! Magnitude: ||w|| = ||u|| ||v|| sin(θ). Orientation: w is perpendicular to both u and v, by the right-hand rule.

53 Vector Product Computation
u × v = (u2v3 − u3v2, u3v1 − u1v3, u1v2 − u2v1)
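In numpy the computation is built in as np.cross; a quick check on the standard basis vectors confirms the magnitude and orthogonality properties:

```python
import numpy as np

u = np.array([1.0, 0.0, 0.0])
v = np.array([0.0, 1.0, 0.0])
w = np.cross(u, v)
print(w)                                   # [0. 0. 1.]: perpendicular to both
print(np.linalg.norm(w))                   # 1.0 = ||u|| ||v|| sin(90 deg)
print(u @ w == 0.0 and v @ w == 0.0)       # True: orthogonality check
```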

