Range Imaging Through Triangulation

Range Imaging Through Triangulation
Obviously, this is a very slow process and not suitable for dynamic scenes. To speed things up, we can use a laser that projects a vertical line of light onto the scene. This laser rotates around its vertical axis and thereby sweeps the line of light across the scene. Since only the horizontal positions of points vary (and it is these horizontal displacements that encode depth), the vertical order of points is preserved. This allows us to compute the depth of each point along the vertical line without ambiguity.

Range Imaging Through Triangulation [figure slide]

Range Imaging Through Triangulation
Although this method is faster, it still requires a complete horizontal scan before a depth image is complete. Maybe we should use a pattern of many vertical lines that only needs to be shifted by the distance between neighboring lines? The disadvantage of this idea is that we could confuse points in different vertical lines, i.e., associate points with incorrect projection angles. However, we can overcome this problem by taking multiple images of the same scene with the pattern in the same position, where a different subset of lines is projected in each image.

Range Imaging Through Triangulation
Then each line can be uniquely identified by its pattern of presence/absence across the images. For example, for 7 vertical lines we need a series of 3 images to do this encoding:

          Line #1  Line #2  Line #3  Line #4  Line #5  Line #6  Line #7
Image a   off      off      off      on       on       on       on
Image b   off      on       on       off      off      on       on
Image c   on       off      on       off      on       off      on

Obviously, with this technique we can encode up to (n − 1) lines using log2(n) images. Therefore, this method is more efficient than the single-line scanning technique.
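
As a sketch of the bookkeeping this encoding requires (the function names and the demo values are illustrative, not part of the lecture), each line index can be written in binary across the image series and decoded per pixel:

```python
import numpy as np

def encode_patterns(num_lines, num_images):
    """Binary-code line k (1..num_lines) across num_images on/off patterns.
    Returns a boolean array pattern[image, line]: True = line is projected."""
    assert num_lines <= 2 ** num_images - 1
    lines = np.arange(1, num_lines + 1)
    # Bit i of the line index (MSB first) decides its presence in image i.
    return np.array([(lines >> (num_images - 1 - i)) & 1
                     for i in range(num_images)]).astype(bool)

def decode_pixel(on_off_sequence):
    """Recover a line index from the on/off sequence observed at one pixel."""
    index = 0
    for seen in on_off_sequence:      # MSB first, matching encode_patterns
        index = (index << 1) | int(seen)
    return index                      # 0 means no line ever hit this pixel

patterns = encode_patterns(num_lines=7, num_images=3)
print(patterns.astype(int))              # rows = images a, b, c; columns = lines 1..7
print(decode_pixel([True, False, True])) # -> 5, i.e., line #5
```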

Range Imaging Through Triangulation [figure slide]

The Microsoft Kinect V1 System [figure slides]

The Microsoft Kinect V2 System
The Kinect V2 uses a time-of-flight camera to measure depth. It contains a broad IR illuminator whose light is intensity-modulated. The reflected light is projected through a lens onto an array of IR light sensors. For each sensor element, the phase shift between the reflected light and the currently emitted light is computed.

The Microsoft Kinect V2 System
The phase shift is proportional to the distance that the light traveled and can thus be used to compute depth across the received image. Note that the measurable depth range is limited by the modulation wavelength: the system cannot decide, for example, whether 0.3 or 1.3 modulation cycles have passed. The advantage of time-of-flight cameras is that they need neither an offset between illuminator and camera nor any stereo-matching algorithm.
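
As a worked sketch of this phase-to-depth relation (the 50 MHz modulation frequency below is an assumed illustrative value, not a Kinect specification):

```python
import numpy as np

C = 299_792_458.0                     # speed of light in m/s

def phase_to_depth(phase, mod_freq_hz):
    """Depth from the phase shift (radians) of intensity-modulated light.
    The light travels to the object and back, hence the division by 2."""
    wavelength = C / mod_freq_hz      # modulation wavelength in meters
    return (phase / (2 * np.pi)) * wavelength / 2

f_mod = 50e6                          # assumed 50 MHz modulation frequency
print(phase_to_depth(np.pi, f_mod))   # half a cycle -> about 1.5 m
# Ambiguity: phi and phi + 2*pi give the same reading, so measured depth
# repeats every wavelength / 2 = C / (2 * f_mod), about 3 m at 50 MHz.
print(C / (2 * f_mod))
```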

Now we will talk about… Motion Analysis

Motion Analysis
Motion analysis deals with three main groups of motion-related problems:
- Motion detection
- Moving object detection and localization
- Derivation of 3D object properties
Motion analysis and object tracking combine two separate but inter-related components:
- Localization and representation of the object of interest (the target)
- Trajectory filtering and data association
Depending on the nature of the motion application, one or the other component may be more important.


Differential Motion Analysis
A simple method for motion detection is the subtraction of two or more images in a given image sequence. Usually, this method results in a difference image d(i, j), in which non-zero values indicate areas with motion. For given images f1 and f2 and a threshold ε, d(i, j) can be computed as follows:

d(i, j) = 0  if |f1(i, j) − f2(i, j)| ≤ ε
d(i, j) = 1  otherwise
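
A minimal numpy sketch of this thresholded difference image (the default threshold is the ε = 25 used on the following slides; the synthetic frames are made up for the demo):

```python
import numpy as np

def difference_image(f1, f2, eps=25):
    """Binary difference picture: 1 where the frames differ by more than eps."""
    diff = np.abs(f1.astype(np.int16) - f2.astype(np.int16))
    return (diff > eps).astype(np.uint8)

# Demo with synthetic 8-bit frames; real input would be two consecutive
# grayscale video frames of identical size.
rng = np.random.default_rng(0)
f1 = rng.integers(0, 200, size=(480, 640), dtype=np.uint8)
f2 = f1.copy()
f2[100:200, 150:300] += 50            # simulate a region that changed
print(difference_image(f1, f2).sum(), "changed pixels")
```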

Differential Motion Analysis [figure slide]

Difference Pictures
Another example of a difference picture that indicates the motion of objects (ε = 25). [figure slide]

Difference Pictures
Applying a size filter (size 10) to remove noise from a difference picture. [figure slide]
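
One common way to realize such a size filter is connected-component labeling followed by removal of small regions; a sketch using scipy (the size threshold 10 matches the slide, everything else is an assumption):

```python
import numpy as np
from scipy import ndimage

def size_filter(binary_image, min_size=10):
    """Remove connected regions with fewer than min_size pixels."""
    labels, num = ndimage.label(binary_image)
    if num == 0:
        return binary_image.astype(np.uint8)
    sizes = ndimage.sum(binary_image, labels, index=np.arange(1, num + 1))
    keep = np.zeros(num + 1, dtype=bool)
    keep[1:] = sizes >= min_size      # label 0 is the background
    return keep[labels].astype(np.uint8)
```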

Difference Pictures
The differential method can rather easily be "tricked". Here, the indicated changes were induced by changes in the illumination instead of object or camera motion (again, ε = 25). [figure slide]

Differential Motion Analysis
In order to determine the direction of motion, we can compute the cumulative difference image for a sequence f1, …, fn of more than two images:

d_cum(i, j) = a2·|f1(i, j) − f2(i, j)| + a3·|f1(i, j) − f3(i, j)| + … + an·|f1(i, j) − fn(i, j)|

Here, f1 is used as the reference image, and the weight coefficients ak can be used to give greater weight to more recent frames and thereby highlight the current object positions.

Cumulative Difference Image
Example: a sequence of four binary images in which an object (pixel value 1) moves across the image, with weights a2 = 1, a3 = 2, a4 = 4. [figure slide] In the resulting cumulative difference image, values such as 7, 6, and 4 reveal, through their binary composition, in which frames each pixel differed from the reference image f1 (e.g., 6 = 2 + 4 means the pixel differed in frames 3 and 4).
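
A short numpy sketch of the cumulative difference image with these weights (the one-dimensional frames below are assumed for illustration; they happen to reproduce the values 7, 6, and 4 mentioned on the slide):

```python
import numpy as np

def cumulative_difference(frames, weights):
    """frames[0] is the reference f1; weights[k-2] is the coefficient a_k."""
    ref = frames[0].astype(np.int16)
    d_cum = np.zeros_like(ref)
    for frame, a_k in zip(frames[1:], weights):
        d_cum += a_k * np.abs(ref - frame.astype(np.int16))
    return d_cum

# A length-3 object moving right by one pixel per frame (assumed frames):
frames = [np.array([1, 1, 1, 0, 0, 0, 0]),   # f1 (reference)
          np.array([0, 1, 1, 1, 0, 0, 0]),   # f2
          np.array([0, 0, 1, 1, 1, 0, 0]),   # f3
          np.array([0, 0, 0, 1, 1, 1, 0])]   # f4
print(cumulative_difference(frames, weights=[1, 2, 4]))
# -> [7 6 4 7 6 4 0]; e.g., 6 = 2 + 4 means "differed from f1 in frames 3 and 4"
```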

Differential Motion Analysis
Generally speaking, while differential motion analysis is well suited for motion detection, it is not ideal for the analysis of motion characteristics.

Optical Flow
Optical flow reflects the image changes due to motion during a time interval dt, which must be short enough to guarantee small inter-frame motion changes. The optical flow field is the velocity field that represents the three-dimensional motion of object points across a two-dimensional image. Optical flow computation is based on two assumptions:
- The observed brightness of any object point is constant over time.
- Nearby points in the image plane move in a similar manner (the velocity smoothness constraint).
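
The first assumption can be written as a brightness constancy equation; a first-order Taylor expansion (a standard step in the optical flow literature, not spelled out on this slide) turns it into the optical flow constraint:

f(x + u·dt, y + v·dt, t + dt) = f(x, y, t)   ⇒   fx·u + fy·v + ft = 0

Here (u, v) is the flow vector and fx, fy, ft are the partial derivatives of image brightness. This single equation cannot determine both u and v at a pixel by itself (the aperture problem), which is why the second, smoothness assumption is needed.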

Optical Flow [figure slide]

Optical Flow
The basic idea underlying most algorithms for optical flow computation:
- Regard the image sequence as a three-dimensional (x, y, t) space.
- Determine the x- and y-slopes of equal-brightness pixels along the t-axis.
The computation of actual 3D gradients is usually quite complex and requires substantial computational power for real-time applications.

Optical Flow
Let us consider the two-dimensional case (one spatial dimension x and the temporal dimension t), with an object moving to the right. [figure slide: intensity pattern shown in the (x, t) plane]

Optical Flow
Instead of using gradient methods, one can simply determine those straight lines with a minimum of variation (standard deviation) in intensity along them. [figure slide: minimal-variance lines in the (x, t) plane; annotation: "Flow undefined in these areas"]
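
A toy sketch of this idea in the (x, t) plane (the candidate velocities and the test signal are assumptions for illustration): for each pixel of the middle frame, try a set of slopes and keep the one whose line through the space-time image has the lowest intensity variance.

```python
import numpy as np

def line_variance_flow(xt, velocities):
    """xt: 2-D space-time image indexed [t, x]. For each x of the middle
    frame, return the candidate velocity whose straight line through the
    (x, t) plane has minimal intensity variance."""
    T, X = xt.shape
    t = np.arange(T) - T // 2             # time offsets around the middle frame
    best_v = np.full(X, np.nan)
    best_var = np.full(X, np.inf)
    for v in velocities:
        xs = np.arange(X)[None, :] + np.round(v * t)[:, None].astype(int)
        valid = (xs >= 0).all(axis=0) & (xs < X).all(axis=0)
        samples = xt[np.arange(T)[:, None], np.clip(xs, 0, X - 1)]
        var = samples.var(axis=0)
        better = valid & (var < best_var)
        best_var[better] = var[better]
        best_v[better] = v
    return best_v

# A bright bar moving right by 1 pixel per frame:
xt = np.zeros((5, 20))
for t in range(5):
    xt[t, 5 + t:8 + t] = 1.0
print(line_variance_flow(xt, velocities=[-1, 0, 1]))
# ~1.0 inside the bar; uniform areas are ambiguous (flow undefined there),
# since every slope yields zero variance.
```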

Optical Flow
Optical flow computation will be in error if the constant brightness and velocity smoothness assumptions are violated. In real imagery, such violations are quite common. Typically, the optical flow changes dramatically in highly textured regions, around moving boundaries, at depth discontinuities, etc. The resulting errors then propagate across the entire optical flow solution.

Optical Flow
Global error propagation is the biggest problem of global optical flow computation schemes, and local optical flow estimation helps overcome this difficulty. However, local flow estimation can introduce large errors in homogeneous areas, i.e., regions of constant intensity. One solution to this problem is to assign confidence values to local flow estimates and to take them into account when integrating local and global optical flow.

Block-Matching Motion Estimation
A simple method for deriving optical flow vectors is block matching. The current video frame is divided into a large number of squares (blocks). For each block, we find the same-sized area in the following frame(s) with the greatest intensity correlation to it. The spatiotemporal offset between the original block and its best match in the following frame(s) indicates the likely motion vector. The search can be limited by imposing a maximum velocity threshold.
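
A minimal block-matching sketch (block size, search radius, and the sum-of-absolute-differences criterion are assumptions; the slide only requires some intensity-correlation measure):

```python
import numpy as np

def block_match(prev, curr, block=8, radius=4):
    """Per-block (dy, dx) motion vectors from prev to curr, using the sum of
    absolute differences (SAD) as the intensity-correlation measure."""
    H, W = prev.shape
    vectors = np.zeros((H // block, W // block, 2), dtype=int)
    for by in range(H // block):
        for bx in range(W // block):
            y, x = by * block, bx * block
            ref = prev[y:y + block, x:x + block].astype(np.int32)
            best = (np.inf, 0, 0)
            # The search radius acts as the maximum-velocity threshold.
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    yy, xx = y + dy, x + dx
                    if 0 <= yy <= H - block and 0 <= xx <= W - block:
                        cand = curr[yy:yy + block, xx:xx + block].astype(np.int32)
                        sad = np.abs(ref - cand).sum()
                        if sad < best[0]:
                            best = (sad, dy, dx)
            vectors[by, bx] = best[1:]
    return vectors
```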

Feature Point Correspondence
Feature point correspondence is another method for motion field construction. Velocity vectors are determined only for corresponding feature points. Object motion parameters can then be derived from the computed motion field vectors, and motion assumptions can help to localize moving objects. Frequently used assumptions include, as discussed above:
- Maximum velocity
- Small acceleration
- Common motion

Feature Point Correspondence
The idea is to find significant points (interest points, feature points) in all images of the sequence, i.e., points least similar to their surroundings, representing object corners, borders, or any other characteristic features that can be tracked over time. Essentially the same measures as for stereo matching can be used. Point detection is followed by a matching procedure that looks for correspondences between these points over time. The main difference from stereo matching is that we can no longer simply search along an epipolar line; instead, the search area is defined by our motion assumptions. The process results in a sparse velocity field. Motion detection based on correspondence works even for relatively long inter-frame time intervals.
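
As a hedged sketch of this pipeline using OpenCV (the lecture does not prescribe a particular detector or matcher; Shi–Tomasi corners plus pyramidal Lucas–Kanade tracking is one common realization, and the parameter values are assumptions):

```python
import cv2

def track_features(frame1, frame2, max_corners=200):
    """Detect interest points in frame1 and find their correspondences in
    frame2; the matched pairs form a sparse velocity field."""
    g1 = cv2.cvtColor(frame1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(frame2, cv2.COLOR_BGR2GRAY)
    # Interest points: pixels least similar to their surroundings (corners).
    pts1 = cv2.goodFeaturesToTrack(g1, maxCorners=max_corners,
                                   qualityLevel=0.01, minDistance=7)
    # Pyramidal Lucas-Kanade search; winSize and maxLevel bound how far a
    # point may move, playing the role of a maximum-velocity assumption.
    pts2, status, _ = cv2.calcOpticalFlowPyrLK(g1, g2, pts1, None,
                                               winSize=(21, 21), maxLevel=3)
    good = status.ravel() == 1
    return pts1[good].reshape(-1, 2), pts2[good].reshape(-1, 2)
```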