1 Chapter 21 Machine Vision

2 Chapter 21 Contents (1)
- Human Vision
- Image Processing
- Edge Detection
- Convolution and the Canny Edge Detector
- Segmentation
- Classifying Edges

3 Chapter 21 Contents (2)
- Using Texture
- Structural Texture Analysis
- Determining Shape and Orientation from Texture
- Interpreting Motion
- Making Use of Vision
- Face Recognition

4 Human Vision

5 Image Processing
- Image processing consists of the following components:
  - Image capture
  - Edge detection
  - Segmentation
  - Three-dimensional segmentation
  - Recognition and analysis
- Image capture is the process of converting a visual scene into data that can be processed.

6 Edge Detection (1)
- Edge detection is the first phase in processing image data.
- The following images show a photograph of a hand and the edges detected in this image.

7 Edge Detection (2)
- Every edge represents some kind of discontinuity in the image.
- Most edges are depth discontinuities.
- Other discontinuities are:
  - Surface orientation discontinuities
  - Surface reflectance discontinuities
  - Illumination discontinuities (shadows)

8 Convolution and the Canny Edge Detector (1)
- One method of edge detection is to differentiate the image:
  - Discontinuities will have the highest differentials.
- This does not work well with noisy images.
- Convolution is better for such images.

9 Convolution and the Canny Edge Detector (2)
- The convolution of two discrete functions f(a, b) and g(a, b) is defined as follows:
  \( (f * g)(a, b) = \sum_{u} \sum_{v} f(u, v)\, g(a - u, b - v) \)
- The convolution of continuous functions f(a, b) and g(a, b) is defined as follows:
  \( (f * g)(a, b) = \int\!\!\int f(u, v)\, g(a - u, b - v)\, du\, dv \)
- An image can be smoothed, to eliminate noise, by convolving it with the Gaussian function:
  \( G_\sigma(x) = \frac{1}{\sigma\sqrt{2\pi}} e^{-x^2 / 2\sigma^2} \)
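As a concrete illustration, here is a minimal sketch of discrete convolution and Gaussian smoothing using NumPy and SciPy. The image, kernel radius and σ value are illustrative assumptions, not values taken from the slides.

```python
import numpy as np
from scipy.signal import convolve2d

def gaussian_kernel(sigma, radius):
    """Sampled 1D Gaussian, normalised to sum to 1."""
    x = np.arange(-radius, radius + 1)
    g = np.exp(-x**2 / (2 * sigma**2))
    return g / g.sum()

# The 2D Gaussian is separable: build it as the outer product of two 1D kernels.
g = gaussian_kernel(sigma=1.5, radius=4)
kernel2d = np.outer(g, g)

image = np.random.rand(64, 64)   # stand-in for a grey-level image
smoothed = convolve2d(image, kernel2d, mode="same", boundary="symm")
```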

10 Convolution and the Canny Edge Detector (3)
- The image, after smoothing, can be differentiated to detect the edges.
- The peaks in the differential correspond to the edges in the original image.
- In fact, the same result can be obtained by convolving the image with the differential of G:
  \( G'_\sigma(x) = -\frac{x}{\sigma^3\sqrt{2\pi}} e^{-x^2 / 2\sigma^2} \)

11 Convolution and the Canny Edge Detector (4)
- This method only works with one-dimensional edges. To detect two-dimensional edges we convolve with two filters, and square and add the results:
  \( (I * f_1)^2 + (I * f_2)^2 \)
  where I(x, y) is the value of the pixel at location (x, y) in the image.
- Filter 1 is \( f_1 = G'_\sigma(x)\, G_\sigma(y) \); Filter 2 is \( f_2 = G'_\sigma(y)\, G_\sigma(x) \).
- This is the Canny edge detector.
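The gradient-magnitude stage described above might be sketched as follows, assuming an arbitrary σ. Note that the full Canny detector also performs non-maximum suppression and hysteresis thresholding, which are omitted here.

```python
import numpy as np
from scipy.signal import convolve2d

sigma, radius = 1.5, 4
x = np.arange(-radius, radius + 1)
g = np.exp(-x**2 / (2 * sigma**2))
g /= g.sum()
dg = -x / sigma**2 * g            # derivative of the (sampled) Gaussian, G'_sigma

# Filter 1 = G'_sigma(x) G_sigma(y); Filter 2 = G'_sigma(y) G_sigma(x).
f1 = np.outer(g, dg)              # differentiate along x, smooth along y
f2 = np.outer(dg, g)              # differentiate along y, smooth along x

image = np.random.rand(64, 64)    # stand-in for a grey-level image
ex = convolve2d(image, f1, mode="same", boundary="symm")
ey = convolve2d(image, f2, mode="same", boundary="symm")
edges = np.sqrt(ex**2 + ey**2)    # large values mark edge pixels
```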

12 Segmentation
- Once the edges have been detected, they can be used to segment the image.
- Segmentation involves dividing the image into areas which do not contain edges.
- These areas will not have sharp changes in colour or shading.
- In fact, edge detection will not always entirely segment an image.
- Another method is thresholding.
- Thresholding involves joining together pixels that have similar colours.
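A minimal sketch of threshold-based segmentation: pixels are split by a grey-level threshold and connected pixels with the same label form the segments. The threshold value of 0.5 is an arbitrary assumption.

```python
import numpy as np
from scipy.ndimage import label

image = np.random.rand(64, 64)        # stand-in for a grey-level image
binary = image > 0.5                  # 0.5 is an arbitrary threshold
segments, n_segments = label(binary)  # group connected similar pixels into regions
```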

13 Classifying Edges (1)
- After extracting edges, it is useful to classify them:
  - A convex edge is an edge between two faces that are at an angle of more than 180° from each other.
  - A concave edge is an edge between two faces that are at an angle of less than 180° from each other.
  - An occluding edge is a depth discontinuity.

14 Classifying Edges (2)
- The following diagram shows a line drawing that has had all its edges classified as convex (+), concave (-) or occluding (arrow):

15 Classifying Edges (3)
- Most vertices represent a meeting of three faces.
- There are only sixteen possible ways these trihedral vertices can be labeled:

16 Classifying Edges (4)
- The Waltz algorithm uses this constraint. It works as follows (see the sketch after this list):
  - The first edge that is visited is marked with all possible labels.
  - The algorithm then moves on to an adjacent edge and attempts to label it.
  - If an edge cannot be labeled, the algorithm backtracks.
  - Thus, depth-first search is applied to attempt to find a consistent labeling for the whole image.
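Below is a schematic sketch of this backtracking search. The junction catalogue is deliberately tiny and hypothetical; a real implementation would encode the physically possible labellings of L, fork, arrow and T junctions as the constraints.

```python
# Edge labels: '+' convex, '-' concave, '>' occluding.
LABELS = ['+', '-', '>']

def waltz(edges, constraints, assignment=None):
    """Depth-first search for a consistent labelling of all edges.

    `edges` is a list of edge identifiers; `constraints` is a list of
    predicates, each returning False only when a partial assignment
    already violates the junction catalogue.
    """
    assignment = assignment or {}
    if len(assignment) == len(edges):
        return assignment                        # complete, consistent labelling
    edge = next(e for e in edges if e not in assignment)
    for lab in LABELS:
        trial = dict(assignment, **{edge: lab})
        if all(check(trial) for check in constraints):
            result = waltz(edges, constraints, trial)
            if result is not None:
                return result
    return None                                  # dead end: caller backtracks

# Toy example with one hypothetical constraint: edges 'a' and 'b' meet at a
# junction whose catalogue forbids them both being concave.
edges = ['a', 'b']
constraints = [lambda t: not (t.get('a') == '-' and t.get('b') == '-')]
print(waltz(edges, constraints))                 # e.g. {'a': '+', 'b': '+'}
```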

17 Using Texture (1)
- Textures, such as these, tell us a great deal about images, including:
  - Orientation
  - Shape
- We can also determine what the pictures on the right are showing, simply by their textures.

18 Using Texture (2)
- A statistical method of determining texture is to use co-occurrence matrices.
- D(m, n) is the number of pairs of pixels in our picture, P, for which:
  P(i, j) = m and P(i + δi, j + δj) = n
  where (i, j) ranges over the pixels of P, and δi and δj are small increments.
- D defines how likely it is that any two pixels a particular distance apart (δi and δj) will have a particular pair of values.
- The co-occurrence matrix is defined as C = D + D^T, where D^T is the transpose of D.
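A direct sketch of this definition in NumPy follows; the number of grey levels and the offset (δi, δj) = (1, 1) are illustrative choices, and the loop assumes non-negative offsets.

```python
import numpy as np

def cooccurrence(P, di, dj, levels=256):
    """C = D + D^T, where D(m, n) counts pixel pairs (di, dj) apart
    whose grey levels are m and n respectively."""
    D = np.zeros((levels, levels), dtype=np.int64)
    rows, cols = P.shape
    for i in range(rows - di):
        for j in range(cols - dj):
            D[P[i, j], P[i + di, j + dj]] += 1
    return D + D.T

P = np.random.randint(0, 256, size=(64, 64))   # stand-in 8-bit image
C = cooccurrence(P, di=1, dj=1)
```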

19 Structural Texture Analysis
- The structural approach treats textures as being made up of individual units called texels.
- In this image, each tile is a texel.
- Texel analysis involves searching for repeated patterns and shapes within an image.

20 Determining Shape and Orientation from Texture (1)
- These are good examples of pictures where texture helps to determine the shape and orientation.
- Note that the second image, although it is a flat, two-dimensional shape, looks like a sphere.
- This is because a sphere is the only sensible way for our brains to explain the texture.

21 Determining Shape and Orientation from Texture (2)
- One way to determine orientation is to assume that each texel is flat.
- The extent of distortion in the shape of each texel then tells us the angle at which it is being viewed.
- Orientation involves determining slant (σ) and tilt (τ), as shown here:

23 Interpreting Motion (1)
- Detecting motion is vital in mammalian vision.
- Similarly, agents that interact with the real world need to be able to interpret motion.
- We are interested in two types of motion:
  - Actual motion of other objects
  - Apparent motion caused by the motion of the agent
- This apparent motion is known as optical flow, and the vectors that define the apparent motion are the motion field.

24 Interpreting Motion (2)
- The arrows on this photo show the motion field.
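One simple way to estimate such a motion field, sketched below, is block matching between two consecutive frames: for each block in the first frame, search a small neighbourhood in the second frame for the best-matching block. The block and search sizes are illustrative assumptions; practical optical-flow methods (e.g. Lucas-Kanade) are considerably more refined than this sketch.

```python
import numpy as np

def motion_field(frame1, frame2, block=8, search=4):
    """Per-block displacement vectors (dy, dx) from frame1 to frame2."""
    h, w = frame1.shape
    flow = np.zeros((h // block, w // block, 2))
    for bi in range(h // block):
        for bj in range(w // block):
            y, x = bi * block, bj * block
            patch = frame1[y:y + block, x:x + block]
            best, best_dv = np.inf, (0, 0)
            # Exhaustively search a (2*search+1)^2 neighbourhood.
            for dy in range(-search, search + 1):
                for dx in range(-search, search + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= h - block and 0 <= xx <= w - block:
                        cand = frame2[yy:yy + block, xx:xx + block]
                        err = np.sum((patch - cand) ** 2)
                        if err < best:
                            best, best_dv = err, (dy, dx)
            flow[bi, bj] = best_dv
    return flow
```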

25 Making Use of Vision
- What purpose does machine vision really serve?
- It can be used to control mobile agents or unmanned vehicles, such as those sent to other planets.
- Another purpose is to identify objects in the agent's environment.
  - If the agent is to interact with these objects (pick them up, sit on them, talk to them), it must be able to recognize that they are there.

26 Face Recognition
- An example of a problem that humans are extremely good at solving, but computers are very bad at.
- Faces must be recognized in varying lighting conditions, from different angles and distances, and with other variable elements such as facial hair, glasses, hats and natural aging.
- Methods used in face recognition vary, but many involve principal component analysis:
  - Identifying the features that most differentiate one face from another, and treating those as a vector to be compared.
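A minimal sketch of this idea (often called the "eigenfaces" approach, assuming that is the intended method): project each face onto the directions of greatest variation in the training set and compare the resulting coefficient vectors. The data here is a random stand-in; a real system would use aligned grey-level face images, and 20 components is an arbitrary choice.

```python
import numpy as np

faces = np.random.rand(100, 32 * 32)        # 100 flattened training "faces"
mean = faces.mean(axis=0)
centred = faces - mean

# Principal components = top right-singular vectors of the centred data.
_, _, vt = np.linalg.svd(centred, full_matrices=False)
components = vt[:20]                        # keep the 20 strongest directions

def signature(face):
    """Coefficient vector used to compare one face against another."""
    return components @ (face - mean)

probe, gallery = np.random.rand(32 * 32), np.random.rand(32 * 32)
distance = np.linalg.norm(signature(probe) - signature(gallery))
```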