September 10, 2013. Computer Vision Lecture 3: Binary Image Processing. Slide 1: Thresholding. Here, the right image is created from the left image by thresholding, assuming that object pixels are darker than background pixels. As you can see, the result is slightly imperfect (some dark background pixels end up in the foreground).
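The thresholding step described above can be sketched in a few lines. This is a minimal illustration, assuming the grayscale image is a list of rows of intensity values and that objects are darker than the background, as on the slide; the function name `threshold` is my own.

```python
def threshold(gray, t):
    """Binarize a grayscale image: 1 where the pixel is darker than
    the threshold t (object pixels), 0 elsewhere (background)."""
    return [[1 if v < t else 0 for v in row] for row in gray]

# A tiny 2x2 example: dark pixels (10, 90) become object pixels.
binary = threshold([[10, 200], [90, 180]], 100)
```

In practice the threshold t is often chosen from the image histogram rather than fixed by hand, which is why results like the one on the slide can still contain misclassified pixels.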

Slide 2: Geometric Properties. Let us say that we want to write a program that can recognize different types of tools in binary images. Then we face the following problem: the same tool could appear in different sizes, positions, and orientations.


Slide 4: Geometric Properties. We could teach our program what the objects look like at different sizes and orientations and let it search all possible positions in the input. However, that would be a very inefficient and inflexible approach. Instead, it is much simpler and more efficient to standardize the input before performing object recognition: we can scale the input object to a given size, center it in the image, and rotate it to a specific orientation.

Slide 5: Computing Object Size. The size A of an object in a binary image B is simply defined as the number of black pixels ("1-pixels") in the image: A = Σ_i Σ_j B[i, j]. A is also called the zeroth-order moment of the object. In order to standardize the size of the object, we expand or shrink the object so that its size matches a predefined value.
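The zeroth-order moment is just a pixel count, which can be sketched directly. This assumes the binary image is a list of rows of 0/1 values; the function name is illustrative.

```python
def object_size(img):
    """Zeroth-order moment A: the number of 1-pixels in the image."""
    return sum(sum(row) for row in img)
```

Standardizing the size then means rescaling the object until `object_size` returns the predefined value.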

Slide 6: Computing Object Position. We compute the position of an object as the center of gravity of its black pixels: x̄ = (1/A) Σ_i Σ_j j·B[i, j] and ȳ = (1/A) Σ_i Σ_j i·B[i, j]. These are also called the first-order moments of the object. In order to standardize the position of the object, we shift its position so that it is in the center of the image.
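The first-order moments can be computed in one pass over the image. A minimal sketch, again assuming a list-of-rows binary image; it returns the center of gravity as (row, column) averages of the 1-pixel coordinates.

```python
def center_of_gravity(img):
    """First-order moments divided by the size A: the mean row
    and mean column index of the 1-pixels."""
    a = si = sj = 0
    for i, row in enumerate(img):
        for j, v in enumerate(row):
            if v:
                a += 1
                si += i
                sj += j
    return si / a, sj / a
```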

Slide 7: Computing Object Position. Centering the object within the image: given the binary image, first compute the center of gravity, then shift the image content so that the center of gravity coincides with the center of the image. [Figure: a binary image with x and y axes and its computed center of gravity.]
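The centering procedure on this slide can be sketched as follows. This is an illustrative implementation under the assumption that pixels shifted past the image border are simply lost; the function name is my own.

```python
def center_object(img):
    """Shift the 1-pixels so that their (rounded) center of gravity
    lands at the center pixel of the image."""
    h, w = len(img), len(img[0])
    pts = [(i, j) for i in range(h) for j in range(w) if img[i][j]]
    # Rounded center of gravity of the object.
    ci = round(sum(i for i, _ in pts) / len(pts))
    cj = round(sum(j for _, j in pts) / len(pts))
    # Offset that moves the center of gravity to the image center.
    di, dj = h // 2 - ci, w // 2 - cj
    out = [[0] * w for _ in range(h)]
    for i, j in pts:
        if 0 <= i + di < h and 0 <= j + dj < w:
            out[i + di][j + dj] = 1
    return out
```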

Slide 8: Computing Object Orientation. We compute the orientation of an object as the orientation of its greatest elongation. This axis of elongation is also called the axis of second moment of the object. It is determined as the axis with the least sum of squared distances between itself and the object points. In order to standardize the orientation of the object, we rotate it around its center so that its axis of second moment is vertical.
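The axis of least second moment has a closed form in terms of the central second-order moments μ20, μ02, and μ11: its angle θ satisfies tan(2θ) = 2μ11 / (μ20 − μ02). A sketch, with the convention (my assumption, not fixed by the slide) that x is the column index, y is the row index, and the angle is measured from the horizontal axis:

```python
import math

def axis_of_second_moment(img):
    """Angle (radians, from the horizontal axis) of the axis of
    least second moment of the 1-pixels."""
    pts = [(j, i) for i, row in enumerate(img)
                  for j, v in enumerate(row) if v]
    a = len(pts)
    cx = sum(x for x, _ in pts) / a
    cy = sum(y for _, y in pts) / a
    # Central second-order moments.
    mu20 = sum((x - cx) ** 2 for x, _ in pts)
    mu02 = sum((y - cy) ** 2 for _, y in pts)
    mu11 = sum((x - cx) * (y - cy) for x, y in pts)
    return 0.5 * math.atan2(2 * mu11, mu20 - mu02)
```

A horizontal row of pixels yields an angle of 0, and a vertical column yields π/2, matching the intuition that the axis follows the greatest elongation.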

Slide 9: Projections. Projections of a binary image indicate the number of 1-pixels in each column, row, or diagonal of that image. We refer to them as horizontal, vertical, or diagonal projections, respectively. Although projections occupy much less memory than the image they were derived from, they still contain essential information about it.

Slide 10: Projections. Knowing only the horizontal or vertical projection of an image, we can still compute the size of the object in that image. Knowing both the horizontal and vertical projections, we can also compute the position of the object. Knowing the horizontal, vertical, and diagonal projections, we can additionally compute the orientation of the object.

Slide 11: Projections. [Figure: a binary image with its horizontal and vertical projections.]

Slide 12: Projections. [Figure: the diagonal projection of a binary image.]

Slide 13: Some Definitions. For a pixel [i, j] in an image, its 4-neighbors (the 4-neighborhood) are the four horizontally and vertically adjacent pixels [i-1, j], [i+1, j], [i, j-1], and [i, j+1]. Its 8-neighbors (the 8-neighborhood) additionally include the four diagonally adjacent pixels [i-1, j-1], [i-1, j+1], [i+1, j-1], and [i+1, j+1].
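These neighborhood definitions translate directly into a small helper, used throughout the rest of this lecture's algorithms. A sketch (bounds checking is left to the caller; the name is illustrative):

```python
def neighbors(i, j, connectivity=4):
    """Coordinates of the 4- or 8-neighbors of pixel [i, j]."""
    n4 = [(i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)]
    if connectivity == 4:
        return n4
    # 8-neighborhood: the 4-neighbors plus the four diagonals.
    return n4 + [(i - 1, j - 1), (i - 1, j + 1),
                 (i + 1, j - 1), (i + 1, j + 1)]
```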

Slide 14: Some Definitions. A path from the pixel at [i_0, j_0] to the pixel at [i_n, j_n] is a sequence of pixel indices [i_0, j_0], [i_1, j_1], …, [i_n, j_n] such that the pixel at [i_k, j_k] is a neighbor of the pixel at [i_{k+1}, j_{k+1}] for all k with 0 ≤ k ≤ n - 1. If the neighbor relation uses 4-connection, then the path is a 4-path; for 8-connection, the path is an 8-path.

Slide 15: Some Definitions. The set of all 1-pixels in an image is called the foreground and is denoted by S. A pixel p ∈ S is said to be connected to q ∈ S if there is a path from p to q consisting entirely of pixels of S. Connectivity is an equivalence relation, because every pixel p is connected to itself (reflexivity); if p is connected to q, then q is connected to p (symmetry); and if p is connected to q and q is connected to r, then p is connected to r (transitivity).

Slide 16: Some Definitions. A set of pixels in which each pixel is connected to all other pixels is called a connected component. The set of all connected components of –S (the complement of S) that have points on the border of the image is called the background; all other components of –S are called holes. In the example on the slide, 4-connectedness yields 4 objects and 1 hole, while 8-connectedness yields 1 object and no hole. To avoid such ambiguity, use 4-connectedness for the foreground and 8-connectedness for the background, or vice versa.

Slide 17: Some Definitions. The boundary of S is the set of pixels of S that have 4-neighbors in –S; it is denoted by S'. The interior is the set of pixels of S that are not in its boundary, that is, S – S'. A region T surrounds a region S (or S is inside T) if any 4-path from any point of S to the border of the picture must intersect T. [Figure: an original image alongside its boundary, interior, and surround.]
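The boundary definition above can be sketched directly: a pixel of S belongs to S' if at least one of its 4-neighbors lies outside S (pixels on the image border count, since their out-of-bounds neighbors are not in S). The function name is my own.

```python
def boundary(img):
    """Pixels of S with at least one 4-neighbor outside S."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            if img[i][j]:
                for ni, nj in ((i - 1, j), (i + 1, j), (i, j - 1), (i, j + 1)):
                    # Out-of-bounds neighbors and 0-pixels are both in -S.
                    if not (0 <= ni < h and 0 <= nj < w) or not img[ni][nj]:
                        out[i][j] = 1
                        break
    return out
```

For a solid 3x3 block, only the center pixel is interior; the eight surrounding pixels form the boundary.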

Slide 18: Component Labeling. Component labeling is one of the most fundamental operations on binary images. It is used to distinguish different objects in an image, for example, bacteria in microscopic images. We find all connected components in the image and assign a unique label to all pixels in the same component.

Slide 19: Component Labeling. A simple algorithm for labeling connected components works like this: 1. Scan the image to find an unlabeled 1-pixel and assign it a new label L. 2. Recursively assign the label L to all of its neighboring 1-pixels. 3. Stop if there are no more unlabeled 1-pixels. 4. Otherwise, go to step 1. However, this recursive algorithm is very inefficient. Let us develop a more efficient, non-recursive algorithm.
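One non-recursive alternative replaces the recursion with an explicit queue, which also avoids deep call stacks on large components. This is a sketch of that idea, not the specific algorithm the lecture goes on to develop; names and the list-of-rows image format are my own assumptions.

```python
from collections import deque

def label_components(img, connectivity=4):
    """Queue-based (breadth-first) connected component labeling.
    Returns a label image and the number of components found."""
    h, w = len(img), len(img[0])
    labels = [[0] * w for _ in range(h)]
    offsets = [(-1, 0), (1, 0), (0, -1), (0, 1)]
    if connectivity == 8:
        offsets += [(-1, -1), (-1, 1), (1, -1), (1, 1)]
    next_label = 0
    for si in range(h):
        for sj in range(w):
            if img[si][sj] and not labels[si][sj]:
                # Found an unlabeled 1-pixel: start a new component.
                next_label += 1
                labels[si][sj] = next_label
                queue = deque([(si, sj)])
                while queue:
                    i, j = queue.popleft()
                    for di, dj in offsets:
                        ni, nj = i + di, j + dj
                        if (0 <= ni < h and 0 <= nj < w and
                                img[ni][nj] and not labels[ni][nj]):
                            labels[ni][nj] = next_label
                            queue.append((ni, nj))
    return labels, next_label
```

Under 4-connectedness, two 1-pixels touching only diagonally end up in different components; switching to `connectivity=8` merges them, which is exactly the ambiguity discussed on slide 16.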