Learning Object Representation Andrej Lúčny Department of Applied Informatics Faculty of Mathematics, Physics and Informatics Comenius University, Bratislava.

Regular objects: a few parameters fully describe the object, so to recognize the object = to specify its parameters.

Regular objects, e.g. via the Hough transform: conversion of the image into parameters.

Image: three arrays r[h,w], g[h,w], b[h,w], with values corresponding to the color components.

Red component

Green component

Blue component

Intensity map: bw[i,j] = 0.3*r[i,j] + 0.59*g[i,j] + 0.11*b[i,j]
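A minimal NumPy sketch of this conversion (the array names r, g, b, bw and the 240 x 320 example size are illustrative, not prescribed by the slides):

```python
import numpy as np

def intensity_map(r, g, b):
    # Weighted sum of the color components (standard luminance weights).
    return 0.3 * r + 0.59 * g + 0.11 * b

# Example: split a random 240 x 320 RGB image into its three channels.
rgb = np.random.randint(0, 256, (240, 320, 3), dtype=np.uint8)
r, g, b = (rgb[..., i].astype(float) for i in range(3))
bw = intensity_map(r, g, b)   # intensity map, shape (240, 320)
```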

Looking for edges: one image line (row) can be represented as a function of intensity.

Plotting intensity against the column index: edges correspond to steep sections of the curve.

Sobel operator: the 3x3 neighborhood of pixel a[i,j] is combined with two masks,
dx mask: [-1 0 +1; -2 0 +2; -1 0 +1]    dy mask: [-1 -2 -1; 0 0 0; +1 +2 +1],
giving
dx[i,j] = a[i-1,j+1] + 2a[i,j+1] + a[i+1,j+1] - a[i-1,j-1] - 2a[i,j-1] - a[i+1,j-1]
dy[i,j] = a[i+1,j-1] + 2a[i+1,j] + a[i+1,j+1] - a[i-1,j-1] - 2a[i-1,j] - a[i-1,j+1]

The Sobel operator approximates the image derivative (gradient). Given a threshold, the Sobel response indicates where the edges are.

Sobel operator: the |dx| and |dy| response images.

Sobel operator: |grad| = √(dx² + dy²)
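A hedged OpenCV sketch of the whole Sobel step; the input file name and the threshold value 100 are illustrative assumptions, not values from the slides:

```python
import cv2
import numpy as np

bw = cv2.imread("scene.png", cv2.IMREAD_GRAYSCALE)   # intensity map

# Horizontal and vertical derivatives with the 3x3 Sobel masks shown above.
dx = cv2.Sobel(bw, cv2.CV_64F, 1, 0, ksize=3)
dy = cv2.Sobel(bw, cv2.CV_64F, 0, 1, ksize=3)

# Gradient magnitude and thresholding to a binary edge image.
grad = np.sqrt(dx ** 2 + dy ** 2)
edges = (grad > 100).astype(np.uint8) * 255
```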

Binary image

Thinning: the thick edges of the binary image are reduced to one-pixel-wide curves.
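One possible way to perform the thinning, assuming the opencv-contrib package (cv2.ximgproc) is available; this reuses the edges image from the previous sketch and is not necessarily the method used on the slide:

```python
import cv2

# Zhang-Suen thinning: reduce thick edges to one-pixel-wide curves.
thinned = cv2.ximgproc.thinning(edges, thinningType=cv2.ximgproc.THINNING_ZHANGSUEN)
```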

Hough transform. Example: circle. Task: how to turn the thinned image into circle parameters. Parameters: center x-coordinate, center y-coordinate, radius (x, y, r).

Hough transform. Each parameter has a particular range; e.g. for an image with resolution 320 x 240, the range of the center x-coordinate is 0–319, the range of the center y-coordinate is 0–239, and the range of the radius is bounded by the image size. We evaluate the probability of each tuple (x, y, r) from the given ranges.

Hough transform. The probability P[x,y,r] is given by the number of witnesses, i.e. white pixels of the thinned image that would also be white if one drew the circle with parameters x, y, r.
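A brute-force sketch of this voting scheme; the radius bounds and the number of angle samples are illustrative assumptions:

```python
import numpy as np

def hough_circles(thinned, r_min=5, r_max=120):
    # Accumulator P[y, x, r]: number of witnesses for every candidate circle.
    h, w = thinned.shape
    acc = np.zeros((h, w, r_max + 1), dtype=np.int32)
    thetas = np.linspace(0, 2 * np.pi, 100, endpoint=False)
    ys, xs = np.nonzero(thinned)                  # white pixels = witnesses
    for py, px in zip(ys, xs):
        for r in range(r_min, r_max + 1):
            # Every center at distance r from this edge pixel gets a vote.
            cx = np.round(px - r * np.cos(thetas)).astype(int)
            cy = np.round(py - r * np.sin(thetas)).astype(int)
            ok = (cx >= 0) & (cx < w) & (cy >= 0) & (cy < h)
            acc[cy[ok], cx[ok], r] += 1
    return acc

# Usage: the recognized circle is the tuple (x, y, r) with the most witnesses.
# acc = hough_circles(thinned)
# y, x, r = np.unravel_index(acc.argmax(), acc.shape)
```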

Hough transform: the circle is recognized!

Irregular objects: the parameters of irregular objects are not clear! It is better to look for a universal method of learning their representation.

Irregular objects: how are such objects represented? E.g. by dominant orientation templates.

Dominant orientation templates (DOT): a simple but fast and efficient method.

Motivation: matching a template against an image while dealing with thinned edges.

Edge detector (Canny): intensity → |dx|, |dy| → |gradient| → thinned edges, together with the gradient orientations.
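A sketch of this edge/orientation pipeline with OpenCV; the Canny thresholds, the input file name, and the 8-bin quantization are illustrative assumptions:

```python
import cv2
import numpy as np

gray = cv2.imread("object.png", cv2.IMREAD_GRAYSCALE)

# Derivatives, then thinned edges (Canny includes non-maximum suppression,
# so its output is already one pixel wide).
dx = cv2.Sobel(gray, cv2.CV_64F, 1, 0, ksize=3)
dy = cv2.Sobel(gray, cv2.CV_64F, 0, 1, ksize=3)
thin = cv2.Canny(gray, 50, 150)

# Gradient orientation at every pixel, quantized into 8 bins over 180 degrees
# (opposite gradient directions are merged, so only the edge direction counts).
angle = np.rad2deg(np.arctan2(dy, dx)) % 180.0
bins = (angle / (180.0 / 8)).astype(int) % 8
```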

Orientations (dx, dy)

Template based on the orientations

Template: the object is covered by non-overlapping regions.

Template: we consider the orientation of every pixel in the region that lies on an edge.

Template: we select a set of dominant (prevailing) orientations.

Template: for each region we keep such a set of a few dominant orientations.

Template The sets of dominant orientations form the representation of the object
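A sketch of building such a representation, reusing the quantized orientations (bins) and thinned edges (thin) from the previous sketch; the region size 8 and k = 3 dominant orientations per region are illustrative choices:

```python
import numpy as np

def build_template(bins, thin, region=8, k=3):
    # For each non-overlapping region keep the set of at most k dominant
    # orientation bins, counted over the edge pixels of that region.
    h, w = thin.shape
    template = {}
    for y in range(0, h - region + 1, region):
        for x in range(0, w - region + 1, region):
            on_edge = thin[y:y + region, x:x + region] > 0
            if not on_edge.any():
                template[(y, x)] = set()
                continue
            hist = np.bincount(bins[y:y + region, x:x + region][on_edge], minlength=8)
            top = np.argsort(hist)[::-1][:k]
            template[(y, x)] = {int(o) for o in top if hist[o] > 0}
    return template
```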

How to use the template: we cover the image with regions and select the single most dominant orientation for each region.

How to use the template: the object is found if, for most of the regions, the dominant orientation from the image is an element of the set of dominant orientations in the template.

Basic formal expression
I – current image
O – image from which the template is created
c – position on the image I
R – region; c + R – the region R placed at position c
DO(O, R) – set of (at most k) dominant orientations in region R of the template, i.e. of the image O
do(I, R) – the dominant orientation in region R of the current image I (do(X, R) = DO(X, R) for k = 1)
δ(x) = 1 if x is true, 0 otherwise
The match score at position c counts the regions whose dominant image orientation appears in the template:
E(I, O, c) = Σ_R δ( do(I, c + R) ∈ DO(O, R) )
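A sketch of this score in code, reusing bins, thin and the template dictionary from the previous sketches; taking do as the single most frequent orientation bin among the edge pixels of the region is an illustrative reading of "dominant":

```python
import numpy as np

def dot_score(bins, thin, template, c, region=8):
    # Sum of deltas: count regions whose dominant image orientation
    # belongs to the template's set of dominant orientations.
    cy, cx = c
    score = 0
    for (y, x), dominant in template.items():
        py, px = y + cy, x + cx
        patch_bins = bins[py:py + region, px:px + region]
        on_edge = thin[py:py + region, px:px + region] > 0
        if patch_bins.shape != (region, region) or not on_edge.any() or not dominant:
            continue
        hist = np.bincount(patch_bins[on_edge], minlength=8)
        do = int(hist.argmax())                 # do(I, c + R)
        score += int(do in dominant)            # delta(do(I, c+R) in DO(O, R))
    return score
```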

Does it work? Yes, if we put the regions at the proper position c; no otherwise. Therefore we will need more templates for various positionings of the regions.

Advanced formal expression
I – current image
O – image from which the template is created
w(O, M) – image O shifted by M
c – position on the image I
R – region; c + R – the region R placed at position c
DO(O, R) – set of (at most k) dominant orientations in region R of the template, i.e. of the image O
do(I, R) – the dominant orientation in region R of the current image I
δ(x) = 1 if x is true, 0 otherwise
The score now takes the best match over the small shifts M of the template:
E(I, O, c) = max_M Σ_R δ( do(I, c + R) ∈ DO(w(O, M), R) )

A more efficient but less precise approach: we can merge several overlapping (shifted) templates into one by simply adding together the orientations from the overlapping regions. Such an integrated template matches regardless of shifting, but it can also detect phantoms.

Formalism of the efficient approach
I – current image
O – image from which the template is created
w(O, M) – image O shifted by M
c – position on the image I
R – region; c + R – the region R placed at position c
DO(O, R) – set of (at most k) dominant orientations in region R of the template, i.e. of the image O
do(I, R) – the dominant orientation in region R of the current image I
δ(x) = 1 if x is true, 0 otherwise
The shifted templates are merged by taking the union of their dominant orientations:
E(I, O, c) = Σ_R δ( do(I, c + R) ∈ ∪_M DO(w(O, M), R) )

More viewpoints: one template still represents the object from a single viewpoint only, so we need to create more templates from various viewpoints. Again, we can integrate templates that are similar enough into one (in the same way as the shifted templates).

DOT efficiency: whether an orientation belongs to a template can be represented by a single bit (0 or 1), and all of DOT can be expressed as bit operations. Therefore DOT is very fast and runs in real time.
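A small sketch of the bit-level idea: each set of dominant orientations becomes an 8-bit mask (one bit per orientation bin, matching the illustrative 8-bin quantization above), and the membership test δ reduces to a single bitwise AND:

```python
def to_mask(orientations):
    # Encode a set of orientation bins {0..7} as one byte.
    m = 0
    for o in orientations:
        m |= 1 << o
    return m

def region_matches(do_bin, template_mask):
    # True if the image's dominant orientation bit is set in the template mask.
    return ((1 << do_bin) & template_mask) != 0

# Example: a template region allowing orientations {1, 2, 5}.
mask = to_mask({1, 2, 5})
print(region_matches(2, mask))   # True  -> the region contributes to the score
print(region_matches(7, mask))   # False
```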

Object border: DOT can also provide an approximate border of the object, formed by those edge pixels whose orientation was found in the template.

How to get the template? Scan the object placed in a high-contrast scene with a camera from various viewpoints (i.e. not in the natural scene but under specific conditions), or separate the object from the scene by another method (e.g. by a movement detector).

Failure of recognition: failure or creativity? The detector responds to a phantom, a pattern in the scene that happens to match the template.

Further study Hinterstoisser, S. - Lepetit, V. - Ilic, S. - Fua, P. - Navab, N.: Dominant Orientation Templates for Real-Time Detection of Texture-Less Objects. IEEE Computer Society Conference on Computer Vision and Pattern Recognition (CVPR), San Francisco, California (USA), June 2010

Thank you ! Andrej Lúčny Department of Applied Informatics Faculty of Mathematics, Physics and Informatics Comenius University, Bratislava