How to identify complex events in the real world

How to identify complex events in the real world
First part: Vision
– Image segmentation
– Checking for basic events: touching, intersecting, moving, pointing, …
Second part:
– Defining complex events in terms of basic events
– Identifying the complex events

Low level vision: overview
Properties of individual objects:
– xpos
– ypos
– bounding box
– area
– color
– ratio
– movement vector
– bounding polygon (*)
– #edges / angles (*)
– orientation (*)
– type {square, rectangle, circle, triangle} (*)
Relations between objects:
– touching (*)
– intersecting (*)
– pointing (*)

Components of low level vision.
Fig. 1: real world image. Fig. 2: bounding boxes. Fig. 3: bounding polygons.
1. Acquisition of real world image.
2. Segmentation with region growing → all basic features.
3. Bounding polygon → more advanced features.

Segmentation: region growing
Based on Tony's pseudocode:

segmentID = 0
for all coordinates x and y
    if [x,y] not visited AND interesting I(x,y)
        enqueue [x,y]
        while queue not empty
            p ← dequeue
            if interesting p
                add p to segment[segmentID]
                enqueue interesting neighbours of p
            mark p as visited
        segmentID ← segmentID + 1
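The pseudocode above can be sketched as a runnable Python function. This is a minimal interpretation, not the original implementation: it assumes a 2D array of pixel values, takes the "interesting" test as a predicate, and marks pixels visited on enqueue (standard BFS practice) rather than on dequeue. The `step` parameter mirrors the seed-sampling parameter described on the next slide.

```python
from collections import deque

def region_growing(image, interesting, step=1):
    """Segment `image` (2D list of pixel values) into connected regions.

    `interesting` is a predicate on a pixel value; `step` checks only
    every step-th pixel as a seed.  Returns a list of segments, each a
    list of (x, y) coordinates.
    """
    h, w = len(image), len(image[0])
    visited = [[False] * w for _ in range(h)]
    segments = []
    for y in range(0, h, step):
        for x in range(0, w, step):
            if visited[y][x] or not interesting(image[y][x]):
                continue
            # Grow a new segment from this seed with a BFS flood fill.
            queue = deque([(x, y)])
            visited[y][x] = True
            segment = []
            while queue:
                px, py = queue.popleft()
                segment.append((px, py))
                for nx, ny in ((px + 1, py), (px - 1, py),
                               (px, py + 1), (px, py - 1)):
                    if (0 <= nx < w and 0 <= ny < h
                            and not visited[ny][nx]
                            and interesting(image[ny][nx])):
                        visited[ny][nx] = True
                        queue.append((nx, ny))
            segments.append(segment)
    return segments
```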

Segmentation
Step parameter: check every nth pixel instead of every pixel.
A pixel is "interesting" if it is not very white and has roughly the same color as its neighbour.
Pixels are marked "visited" by coloring them white, so no extra data structure is needed for them.
Individual pixels of a segment are not stored in a data structure; only the coordinates of the bounding box are stored.

Matching
Every time a frame is grabbed, the new frame is matched against the previous one.
An object in two consecutive frames is considered the same if its bounding box is roughly the same.
Handle new / lost objects.
Calculate a motion vector for each object, and an average motion vector over all objects.
If the differences between the average and the individual motion vectors are all very small, then the camera itself is moving.
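A minimal sketch of this matching step, under assumed conventions: boxes are (x, y, w, h) tuples, "roughly the same" means every component differs by at most a `tol` threshold (the actual threshold is not given in the slides), and matching is greedy.

```python
def match_objects(prev_boxes, cur_boxes, tol=10):
    """Greedy frame-to-frame matching by bounding-box similarity.

    Returns (matches, new_ids, lost_ids) where `matches` maps an index
    into prev_boxes to its matching index in cur_boxes.
    """
    matches = {}
    unmatched_cur = set(range(len(cur_boxes)))
    for i, pb in enumerate(prev_boxes):
        for j in sorted(unmatched_cur):
            if all(abs(a - b) <= tol for a, b in zip(pb, cur_boxes[j])):
                matches[i] = j
                unmatched_cur.remove(j)
                break
    lost = [i for i in range(len(prev_boxes)) if i not in matches]
    return matches, sorted(unmatched_cur), lost

def camera_moving(motion_vectors, tol=2):
    """Camera moves when all objects share roughly the same motion:
    every individual vector stays within `tol` of the average."""
    n = len(motion_vectors)
    if n == 0:
        return False
    avg = (sum(v[0] for v in motion_vectors) / n,
           sum(v[1] for v in motion_vectors) / n)
    return all(abs(v[0] - avg[0]) <= tol and abs(v[1] - avg[1]) <= tol
               for v in motion_vectors)
```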

Matching Problems
Whenever an object is lost there should be some kind of recovery strategy.
– When an object has moved too much, it is lost. It might be useful to check other features (color?) to find the lost object.
– When one object splits a second object, the match fails: 2 objects are lost and 3 new objects are found.
– When a new object enters the frame, a new object is found. In the next frame it has become bigger, so the match fails and yet another new object is found, …

Finding bounding polygons
1. Take the bounding box from the segmentation algorithm.
2. Traverse each of the sides of the bounding box from the left and from the right. Mark the endpoints of the segment (the first pixel close enough to the average color of the segment).
3. Build a polygon from the endpoints. Simplify it: remove angles of 180° and merge points that are too close to each other.
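The simplification in step 3 can be sketched as follows. The thresholds `min_dist` and `angle_tol` are illustrative assumptions; the slides do not specify values.

```python
import math

def simplify_polygon(points, min_dist=3.0, angle_tol=5.0):
    """Simplify a polygon: merge near-duplicate vertices and drop
    vertices whose edges continue in (nearly) the same direction,
    i.e. interior angles close to 180 degrees.

    `points` is an ordered list of (x, y) vertices.
    """
    # Merge consecutive points that are too close together.
    merged = []
    for p in points:
        if not merged or math.dist(merged[-1], p) >= min_dist:
            merged.append(p)
    if len(merged) > 1 and math.dist(merged[0], merged[-1]) < min_dist:
        merged.pop()
    # Drop vertices that lie on a (nearly) straight line.
    result = []
    n = len(merged)
    for i in range(n):
        a, b, c = merged[i - 1], merged[i], merged[(i + 1) % n]
        ang_in = math.atan2(b[1] - a[1], b[0] - a[0])
        ang_out = math.atan2(c[1] - b[1], c[0] - b[0])
        turn = abs(math.degrees(ang_out - ang_in)) % 360
        turn = min(turn, 360 - turn)
        if turn > angle_tol:  # keep only real corners
            result.append(b)
    return result
```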

Problems with bounding polygons
When two objects with the same color are within each other's bounding boxes …
Possible solution: store all pixels of a segment. Whenever a bounding point is found, check if it is really inside the segment.

Polygon classification
Classification based on properties of edges and angles ("every polygon with three edges is a triangle", etc.).
First test results:
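A minimal sketch of such an edge-count classifier, covering the four types listed earlier ({square, rectangle, circle, triangle}). The side-ratio test and the edge-count cutoff for circles are assumptions, not the authors' actual rules.

```python
import math

def classify_polygon(points, ratio_tol=0.1):
    """Classify a simplified bounding polygon by its number of edges:
    three edges -> triangle; four edges -> square or rectangle,
    depending on the side-length ratio; many edges -> circle."""
    n = len(points)
    if n == 3:
        return "triangle"
    if n == 4:
        sides = [math.dist(points[i], points[(i + 1) % 4])
                 for i in range(4)]
        ratio = min(sides) / max(sides)
        return "square" if ratio > 1 - ratio_tol else "rectangle"
    if n >= 8:
        return "circle"  # many short edges approximate a circle
    return "unknown"
```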

Towards high level vision
Expressing more complex events in terms of basic events.

(define (pick-up x y)
  (exists (j)
    (and (during j (move (hand x)))
         (during j (contacts (hand x) y))
         (during j (attached (hand x) y))
         (during j (move y)))))

Basic events: very straightforward definitions!
New / lost objects.
Movement: threshold on the movement vector.
Touching: two objects touch when their bounding polygons touch or are very close to each other.
Intersecting: two objects intersect when their bounding polygons intersect.
Support: an object supports another when the removal of the first causes the second to fall.
Pointing: an object is pointing at another object when the first object is more or less oriented towards the second.
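The touching and intersecting tests can be sketched with axis-aligned bounding boxes. This is a simplification: the slides use bounding polygons, and the `tol` threshold for "very close" is an illustrative assumption.

```python
def boxes_intersect(a, b):
    """Boxes are (x, y, w, h); true when the rectangles overlap."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def boxes_touch(a, b, tol=2):
    """Two objects 'touch' when their boxes overlap or come within
    `tol` pixels of each other: grow one box by `tol` on every side
    and test for overlap."""
    ax, ay, aw, ah = a
    grown = (ax - tol, ay - tol, aw + 2 * tol, ah + 2 * tol)
    return boxes_intersect(grown, b)
```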

Basic events: support
The support event requires explicit reasoning and is very hard to check. Simplification:
– All objects on the table are supported by the table.
– An object on top of another object is supported by the lower one.
– An object in a robot hand is supported by the robot.
– An object is in the robot hand when the object touches the hand and is attached to it (= intersection of polygons).
When objects always stay on the table, the support event becomes trivial.

Building complex events
A rule defines how a complex event is assembled from basic events.
The event manager keeps a memory of rule phases. A rule phase is an entry of the form (rulename, objectbinding1, objectbinding2, phase).
(pick-up, 3,,2) means that object 3 is possibly picking up another, as yet unknown, object. The phase indicates how many subrules have been evaluated successfully.

Evaluating complex events
Whenever a "lost object" event occurs, all rule phases with an object binding to that object are removed from memory.
Every frame we check whether new rule phases can be created. Only new or moved objects can cause a new rule phase to be created: when there are no new objects and no object has moved, there simply cannot be new events.
Every frame we try to advance all rule phases in memory and extend their object bindings.
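The rule-phase bookkeeping can be sketched as follows. The rule table, the subrule names, and the representation of a frame's events as a set are all illustrative assumptions; the slides describe the data structure but not its implementation.

```python
from dataclasses import dataclass

# A rule is an ordered list of subrule names; a rule phase records how
# far a particular object binding has progressed through that list.
RULES = {
    "pick-up": ["touching", "attached", "moving"],
}

@dataclass
class RulePhase:
    rulename: str
    bindings: list   # object ids, None for not-yet-bound slots
    phase: int = 0   # number of subrules evaluated successfully

def advance(rp, events_this_frame):
    """Advance a rule phase when its next subrule fired this frame.
    Returns True when the complex event has completed."""
    subrules = RULES[rp.rulename]
    if rp.phase < len(subrules) and subrules[rp.phase] in events_this_frame:
        rp.phase += 1
    return rp.phase == len(subrules)

def drop_lost(phases, lost_object):
    """Remove all rule phases with a binding to a lost object."""
    return [rp for rp in phases if lost_object not in rp.bindings]
```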

Evaluating complex events
Advancing rule phases may require other events (touching, intersecting, pointing, …) to be checked.
Extension: the evaluation of one rule may force the evaluation of another rule, since subrules can be rules themselves.
Finally, we check for rule phases that were evaluated successfully.