Shape from Contour: Recover 3D information. Müller-Lyer illusion.

Shape from Contour: Recover 3D information

Müller-Lyer illusion

Linear perspective - Ponzo illusion

Shape
1. Shape is a stable property of objects.
2. The difficulty lies in finding a representation of global shape that is general enough to deal with the wide variety of objects in the real world.
3. Its most significant role is in object recognition, where geometric shape, along with colour and texture, provides the most significant cue for identifying objects and classifying what is in an image as an example of some class seen before.

Shape from Contour
 After edge detection, an important question in visual recovery is how to deduce the 3-D structure of a scene from its line drawing.
 An inherent ambiguity exists because, under perspective projection, the line drawing of any scene event (for example, a depth discontinuity) can restrict the location of that event only to a narrow cone of rays (figure C1).

Figure C1 - An infinite number of 3-D shapes can give rise to the same image.
The goal of the shape-from-contour module is to derive information about the orientation of the various faces.

A line drawing of a scene consisting of polyhedra. Shaded surfaces are shadows.

Reference
 Winston, Patrick (1992). Artificial Intelligence (3rd edition). Reading, MA: Addison-Wesley, Chapter 12.

Symbolic Constraints and Propagation
 Ambiguities
– multiple interpretations for an individual component of the input
– an enormous number of possible combinations of the components, many of which cannot actually occur
 The use of constraints
– to reduce the complexity
– analyse the problem domain to determine what the constraints are
– apply a constraint satisfaction algorithm

Edges
 An Obscuring Edge – a boundary between objects, or between an object and the background.
 A Concave Edge – an edge between two faces that form an angle of less than 180 degrees when viewed from outside the object (an inward corner).
 A Convex Edge – an edge between two faces that form an angle of more than 180 degrees when viewed from outside the object (an outward corner).

The Scope of Lines
 For the moment, we do not consider cracks between coplanar faces, or shadow edges between shadows and the background.
 But the approach is extensible to handle these other line types.
 We consider only figures composed exclusively of trihedral vertices, which are vertices at which exactly three planes come together (figure S1).
 You need to know this assumption.

Figure S1 - Some trihedral figures

Figure S1 - Some nontrihedral figures

Determining the Constraints
 Our objective: how to recognize individual objects in a figure.
 To do this, we label all the lines in the figure so that we know which ones correspond to boundaries between objects.
 There is a set of four labels that can be attached to a given line (figure S2).

Figure S2 - Line labelling conventions
+  convex line
-  concave line
>  boundary line with interior to the right (down)
<  boundary line with interior to the right (up)

Figure S2 - An example of line labelling

Four Trihedral Vertex Types
 The number of ways of labelling a figure composed of N lines is 4^N. How do we find the correct one?
 The number of possible line labellings can be reduced by
– constraints on the kinds of vertices
– constraints on the lines: every line must meet other lines at a vertex at each of its ends.
 For trihedral figures there are only four configurations that describe all the possible vertices (figure S3).
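For a sense of scale (an illustrative figure, not taken from the slides), even a modest drawing explodes combinatorially:

```latex
% Exhaustive labellings of a drawing with N lines, four labels per line:
\[
4^{N}\Big|_{N=10} \;=\; 4^{10} \;=\; 2^{20} \;=\; 1\,048\,576
\]
```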

The four trihedral vertex types (S3)

Labels and Their Constraints
 Consider the maximum number of ways that the four line labels might combine at a vertex.
 There are 208 ways to label a trihedral vertex.
 But, in fact, only a very small number of these labellings can actually occur in line drawings representing real physical objects (figure S4).
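The 208 can be reconstructed as follows (this derivation is an assumption here, not spelled out on the slide): an L junction joins two lines, while arrow, fork and T junctions each join three, and every line can take any of the four labels:

```latex
\[
\underbrace{4^{2}}_{\text{L}} \;+\; \underbrace{3 \cdot 4^{3}}_{\text{arrow, fork, T}} \;=\; 16 + 192 \;=\; 208
\]
```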

A figure occupying one octant (S4)

S5

Illustration example
 Octants: the three planes at a trihedral vertex divide the surrounding space into eight octants.
 Trihedral figures may differ in the number of octants that they fill and in the position (which must be one of the unfilled octants) from which they are viewed.
 Any vertex that can occur in a trihedral figure must correspond to such a division of space, with some number of octants (between one and seven) filled, viewed from one of the unfilled octants.
 So to find all the vertex labellings that can occur, we need only consider all the ways of filling the octants and all the ways of viewing those fillings, and then record the types of vertices that we find.

Possible Trihedral Vertex Labelings (S5)
 This gives a complete list of the possible trihedral vertices and their labellings (figure S6).
 Of the 208 labellings that were theoretically possible, only 18 are physically possible.
 This is a severe constraint on the way that lines in drawings corresponding to real figures can be labelled.

S6 Label set (18)

Label set (16)

W1

Waltz Procedure
1. Pick one vertex and find all the labellings that are possible for it.
2. Move to an adjacent vertex and find all of its possible labellings. The line from the first vertex to the second must end up with only one label, and that label must be consistent with the two vertices it enters.
3. Inconsistent labellings can be eliminated.
4. Continue with another adjacent vertex. Constraints arise from this labelling, and these constraints can be propagated back to vertices that have already been labelled, so the set of possible labellings for them is further reduced.
5. This process proceeds until all the vertices in the figure have been labelled (figures W1, W12).
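The numbered procedure above is, in effect, arc-consistency filtering over junction labellings. The Python sketch below shows one way it could be organised; every name in it (waltz_filter, junction_of, junction_dict, and the shape of the data) is an assumption made for illustration, not Winston's or Waltz's actual code, and the 18-entry trihedral junction catalogue would have to be supplied as junction_dict rather than being reproduced here.

```python
# Minimal, illustrative sketch of Waltz-style constraint propagation.
from collections import deque


def waltz_filter(junctions, junction_of, junction_dict):
    """Prune impossible junction labellings by propagating line constraints.

    junctions:     list of junction (vertex) names, e.g. ["A", "B", "C"].
    junction_of:   dict  name -> (junction_type, ordered tuple of line names).
    junction_dict: dict  junction_type -> set of allowed label tuples, one
                   label per incident line ('+', '-', '>', '<').  This is the
                   static constraint: it never changes during the search.
    Returns:       dict  name -> set of surviving label tuples (the dynamic
                   constraint: the current options for each vertex).

    Caveat (assumption): labels are taken to be expressed in one global
    direction per line; if the catalogue lists boundary arrows relative to
    each junction, '>' and '<' must be flipped at one end of each line.
    """
    # Step 1: start every junction with all labellings allowed for its type.
    candidates = {j: set(junction_dict[junction_of[j][0]]) for j in junctions}

    # Index which junctions touch each line, and at which tuple position.
    touches = {}
    for j in junctions:
        _, line_names = junction_of[j]
        for pos, line in enumerate(line_names):
            touches.setdefault(line, []).append((j, pos))

    # Steps 2-5: repeatedly discard labellings whose label for a shared line
    # is not supported by any labelling of the junction at the other end,
    # propagating each change until a fixed point is reached.
    queue = deque(junctions)
    while queue:
        j = queue.popleft()
        _, line_names = junction_of[j]
        for pos, line in enumerate(line_names):
            supported = {lab[pos] for lab in candidates[j]}
            for k, kpos in touches[line]:
                if k == j:
                    continue
                pruned = {lab for lab in candidates[k] if lab[kpos] in supported}
                if pruned != candidates[k]:
                    candidates[k] = pruned
                    queue.append(k)  # re-examine k's neighbours later
    return candidates
```

On an unambiguous figure this filtering leaves exactly one labelling per junction; on an ambiguous figure at least one junction keeps several, which matches the termination behaviour described on the "Waltz algorithm" slide.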

W12

Convergent Intelligence
 Thus symbolic constraint propagation offers a plausible explanation for one kind of human information processing, as well as a good way for computers to analyse drawings. This idea suggests the following principle:
 The world manifests constraints and regularities. If a computer is to exhibit intelligence, it must exploit those constraints and regularities, no matter what the computer happens to be made of.

W12

Waltz algorithm
 Please see the algorithm in the extra note (the handout given in class, or download it from the module website).
 This algorithm will always find the unique, correct figure labelling if one exists. If a figure is ambiguous, however, the algorithm will terminate with at least one vertex still having more than one labelling attached to it.
 It was applied to a larger class of figures in which cracks and shadows might occur.
 The usefulness of the algorithm increases as the size of the domain increases, and thus as the ratio of physically possible to theoretically possible vertices decreases.

W10

W11

An Example of the Labelling Process

Static Constraints and Dynamic Constraints
 Static constraints
– do not need to be represented explicitly as part of a problem state
– can be encoded directly into the line-labelling algorithm.
 Dynamic constraints
– describe the current options for the labelling of each vertex
– are represented and manipulated explicitly by the line-labelling algorithm.

Labelling and Reality
 Successful labelling is a necessary condition for realizability as an object in a trihedral-vertex world, but not in a world that allows vertices with more than three faces.
 Successful labelling is not a sufficient condition for realizability as an object in a three-faced-vertex world (cf. M. C. Escher: figures can be locally consistent yet globally impossible).

Extensibility
 Shadow areas can be of great use in analysing the scene that is being portrayed.
 When these variations are considered, there are more than eighteen allowable vertex labellings.
 But the ratio of physically allowable vertices to theoretically possible ones becomes even smaller than 18/208.
 Thus this approach can be extended to larger domains.

Many line and junction labels are needed to handle shadows and cracks
 Shadows help determine where an object rests against others (Cr1).
 Concave edges often occur where two or three objects meet. It is useful to distinguish among the possibilities by combining the minus label with the one or two boundary labels that would be seen if the objects were separated (Cr2).

CR1

CR2

Illumination Increases the Label Count and Tightens the Constraints
 There are now 11 ways that any particular line may be labelled.
 There are 3^2 = 9 illumination combinations for each of the 11 line labels, giving 99 total possibilities (Cr3).
 Only 50 of these combinations are physically possible.
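Restating the arithmetic behind the 99 (the squaring presumably reflects that each line borders two faces, each in one of three illumination states; that reading is an assumption, not stated on the slide):

```latex
\[
11 \times 3^{2} \;=\; 11 \times 9 \;=\; 99
\quad \text{theoretically possible line interpretations, of which only 50 occur.}
\]
```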

CR3

Summary
 Understand the major difficulties that confront programs designed to perform perceptual tasks.
 Describe the use of a constraint satisfaction procedure as one way of surmounting some of those difficulties.
 Perceptual abilities are essential in the construction of intelligent robots and systems.