Course 14: Dynamic Vision
Biological vision can cope with a changing world:
----- moving and changing objects
----- changing illumination
----- changing viewpoint
----- other changes
Computer vision works in similar ways, but here we focus only on:
----- stationary camera, stationary objects
----- stationary camera, moving objects
----- moving camera, stationary objects
----- moving camera, moving objects
1. Change Detection
----- Stationary camera, moving objects in the scene.
----- Detect moving objects against the stationary scene.
----- Uses two or more image frames.
----- The images are assumed to be well aligned.
(1) Difference Pictures:
$DP_{jk}(x,y) = 1$ if $|f_j(x,y) - f_k(x,y)| > T$; $DP_{jk}(x,y) = 0$ otherwise.
$PDP_{jk}(x,y) = 1$ if $f_j(x,y) - f_k(x,y) > T$; $PDP_{jk}(x,y) = 0$ otherwise.
$NDP_{jk}(x,y) = 1$ if $f_j(x,y) - f_k(x,y) < -T$; $NDP_{jk}(x,y) = 0$ otherwise.
A size filter should be used to remove small connected components that are due to noisy pixels. Note that small motions may go undetected when a size filter is used.
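A minimal sketch of the difference picture and the size filter, assuming grayscale frames stored as NumPy arrays; the threshold T and the minimum component size are free parameters:

```python
import numpy as np
from scipy import ndimage

def difference_picture(f_j, f_k, T):
    """Binary difference picture DP_jk: 1 where |f_j - f_k| > T."""
    return (np.abs(f_j.astype(int) - f_k.astype(int)) > T).astype(np.uint8)

def size_filter(dp, min_size):
    """Remove connected components smaller than min_size pixels."""
    labels, n = ndimage.label(dp)                 # label connected blobs
    sizes = ndimage.sum(dp, labels, range(1, n + 1))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_size                  # keep only large components
    return keep[labels].astype(np.uint8)
```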
(2) Accumulated Difference Pictures
----- A sequence of images taken by a stationary camera at different times.
----- Detects small or slowly moving objects by accumulating the differences of pixel values:
$ADP_0(x,y) = 0$
$ADP_k(x,y) = ADP_{k-1}(x,y) + DP_{1k}(x,y)$
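A sketch of the accumulation over a frame sequence, reusing difference_picture from above; the first frame serves as the reference:

```python
def accumulated_difference(frames, T):
    """ADP_k = ADP_{k-1} + DP_1k: differences against the first frame
    accumulate, so small or slow motions become visible over time."""
    adp = np.zeros(frames[0].shape, dtype=int)
    for f_k in frames[1:]:
        adp += difference_picture(frames[0], f_k, T)
    return adp
```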
(3) Time-Varying Edge Detection
----- Detect the edges of moving objects.
----- Intensity changes in space and in time complement each other, so the detector combines them, e.g. as a product
$E(x,y,t) = S(x,y,t) \cdot D(x,y,t),$
where $S$ is the spatial gradient magnitude and $D$ is the temporal (frame-to-frame) difference.
Since the spatial and temporal changes of the image complement each other, either a strong intensity change with slow object motion, or a weak intensity change with sufficient object motion, can still yield a good edge-detection result for moving objects.
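A sketch of this product-form detector on two consecutive grayscale frames; the threshold T and the use of np.gradient for the spatial derivative are illustrative choices:

```python
def time_varying_edges(f_prev, f_curr, T):
    """Edges of moving objects: spatial gradient magnitude times
    temporal difference, so the two cues compensate for each other."""
    f = f_curr.astype(float)
    gy, gx = np.gradient(f)                       # spatial intensity change
    spatial = np.hypot(gx, gy)
    temporal = np.abs(f - f_prev.astype(float))   # frame-to-frame change
    return (spatial * temporal > T).astype(np.uint8)
```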
(4) Segmenting moving objects with a moving camera
----- Moving objects in a stationary scene.
----- Translationally moving camera.
When the camera performs a translational motion in a stationary scene, the image of the scene looks as if it is flowing out of the focus of expansion (FOE) of the image.
Apply the ego-motion polar (EMP) transformation to every image frame. In the new images in EMP space, stationary points are displaced only along the polar (radial) axis between the frames of the sequence, while moving points generally are not. Thus, moving points and stationary points can be separated.
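A sketch of the EMP resampling, assuming the FOE pixel coordinates are already known; nearest-neighbor sampling and the grid sizes are illustrative:

```python
def emp_transform(frame, foe, r_bins=256, theta_bins=256):
    """Resample a frame into (theta, r) space centered on the FOE.
    A stationary point keeps its theta row across frames and drifts
    only along r; a point whose row changes is independently moving."""
    h, w = frame.shape
    fx, fy = foe
    r_max = np.hypot(max(fx, w - 1 - fx), max(fy, h - 1 - fy))
    r = np.linspace(0.0, r_max, r_bins)
    theta = np.linspace(-np.pi, np.pi, theta_bins)
    rr, tt = np.meshgrid(r, theta)                # rows: theta, cols: r
    xs = np.clip(np.rint(fx + rr * np.cos(tt)), 0, w - 1).astype(int)
    ys = np.clip(np.rint(fy + rr * np.sin(tt)), 0, h - 1).astype(int)
    return frame[ys, xs]
```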
2. 3D Structure from Motion
Recovering 3D structure from images means finding the depth of the scene points that correspond to feature points in the image.
(1) 3D structure from motion of point correspondences:
Under perspective projection (focal length $f = 1$), an image point is $\mathbf{p} = \mathbf{P}/Z$, i.e. $x = X/Z$, $y = Y/Z$. (1)
A rigid motion maps a scene point as $\mathbf{P}' = R\,\mathbf{P} + \mathbf{T}$. (2)
Applying the cross product with $\mathbf{T}$ to both sides of equation (2) eliminates the translation, $\mathbf{T} \times \mathbf{P}' = \mathbf{T} \times R\,\mathbf{P}$. So, taking the dot product with $\mathbf{P}'$ gives the constraint
$\mathbf{P}' \cdot (\mathbf{T} \times R\,\mathbf{P}) = 0,$
which relates the image measurements to the motion parameters.
Once the point correspondences and the motion parameters are known, the 3D depth of each image point can be computed. Note: the 3D structure is recovered only up to a scale factor if the absolute magnitude of the translation is unknown, e.g. when it is estimated from two image frames alone.
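A sketch of the depth computation from a single correspondence under known motion, using the model $Z_2\,\mathbf{p}_2 = Z_1\,R\,\mathbf{p}_1 + \mathbf{T}$ with homogeneous image points $(x, y, 1)$; solving the three equations for the two depths by least squares is one standard route:

```python
def depth_from_correspondence(p1, p2, R, T):
    """Solve Z2*p2 = Z1*R@p1 + T for the depths (Z1, Z2) of one point.
    p1, p2: homogeneous image points (x, y, 1); R, T: known motion."""
    A = np.column_stack((R @ np.asarray(p1, float), -np.asarray(p2, float)))
    (z1, z2), *_ = np.linalg.lstsq(A, -np.asarray(T, float), rcond=None)
    return z1, z2
```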
(2) 3D structure from motion of line correspondences.
Let $l$ and $l'$ be a pair of corresponding lines over two frames of images.
Let the corresponding 3D line have the parametric form
$\mathbf{P}(t) = \mathbf{P}_0 + t\,\mathbf{d},$
where $\mathbf{P}_0$ is a point on the line and $\mathbf{d}$ is its direction.
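Each image line together with the projection center spans a plane; with known motion, intersecting the two planes recovers the 3D line. A sketch under the conventions above ($f = 1$, motion $\mathbf{P}' = R\mathbf{P} + \mathbf{T}$); the plane-intersection formulation is one standard route, not necessarily the slides' exact derivation:

```python
def line_from_two_views(l1, l2, R, T):
    """Reconstruct P(t) = P0 + t*d from one line correspondence.
    An image line l = (a, b, c) (points obey a*x + b*y + c = 0, f = 1)
    and the camera center span the plane n.P = 0 with n = (a, b, c)."""
    n1 = np.asarray(l1, dtype=float)
    n2 = np.asarray(l2, dtype=float)
    # View-2 plane n2.P' = 0 in view-1 coordinates: (R^T n2).P + n2.T = 0.
    m = R.T @ n2
    c = n2 @ np.asarray(T, dtype=float)
    d = np.cross(n1, m)                 # direction lies in both planes
    d /= np.linalg.norm(d)
    # A point on both planes: n1.P = 0, m.P = -c, and pick d.P = 0.
    A = np.vstack((n1, m, d))
    P0 = np.linalg.solve(A, np.array([0.0, -c, 0.0]))
    return P0, d
```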
(3) 3D structure from an ego-motion camera.
----- Stationary scene.
----- Translationally moving camera with known speed.
----- The FOE of the image has been computed.
----- Determine the depth of the 3D scene.
Ego-motion complex log mapping:
Write an image point relative to the FOE in polar form $(r, \theta)$ and define its complex log mapping $w = u + iv = \ln(r\,e^{i\theta})$, i.e.
$u = \ln r, \quad v = \theta.$
Now we want to find the relation of $u$, $v$ to the camera movement in the z-direction. Remember that (assume $f = 1$):
$r = \sqrt{x^2 + y^2} = \frac{\sqrt{X^2 + Y^2}}{Z}$
So,
$u = \ln r = \tfrac{1}{2}\ln(X^2 + Y^2) - \ln Z.$
For translation along the optical axis, $X$ and $Y$ stay constant; if the camera moves forward by $dz$, the depth decreases by $dz$, so differentiating gives
$du = \frac{dz}{z}, \quad dv = 0.$
We can conclude that:
----- In ego-motion complex log mapping (EMCLM) space, $du$ is related only to the depth change of the scene.
----- Since $dz$ can be measured from the speed of the camera movement and $du$ can be computed from the image, the depth of the scene can be found by $z = dz/du$.
----- The movement of the camera in the z-direction does not affect the EMCLM velocity along $v$, i.e. $dv = 0$.
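A sketch of the resulting depth recovery for one stationary point, where r1 and r2 are its distances from the FOE in two frames and dz is the measured forward camera motion:

```python
def depth_from_emclm(r1, r2, dz):
    """z = dz / du with u = ln r: the radial coordinate in EMCLM space
    shifts only with depth, so its change directly encodes depth."""
    du = np.log(r2) - np.log(r1)
    return dz / du
```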
(4) Matching Correspondences
----- Find the corresponding image features (points or lines) in two image frames that correspond to the same features in the 3D scene.
1) Point matching (relaxation labeling)
Given: a set of feature points in image frame 1 and another set of feature points in image frame 2.
Find: a unique correspondence of points between the two sets.
(a) Define an object set $O = \{o_1, o_2, \ldots, o_m\}$ from the image points of frame 1; each element is a node. Define a label set $L = \{l_1, l_2, \ldots, l_n\}$ from the points of frame 2.
(b) Establish a relationship set among the object nodes, e.g. neighboring points.
(c) Establish an initial match set $M^{(0)} = \{(o_1; l_1, \ldots, l_n), \ldots, (o_m; l_1, \ldots, l_n)\}$, i.e. each object node is initially aligned with all labels.
(d) Establish a consistency measure between each node and its related nodes with respect to the aligned pairs; e.g. the pair $(o_i, l_j)$ should be consistent with the pairs aligned to the neighbors of $o_i$.
The consistency measure may be based on:
----- geometric relations among the nodes in the image;
----- gray level or gradient of the node in the original image.
(e) Compute the similarity (or disparity) of each node with respect to a matched pair, e.g. $s_{il} = \frac{1}{1 + c\,\|d_{il}\|}$, where $d_{il}$ is the disparity, such as the displacement vector. The probability of a match between $o_i$ and label $l$ is then
$p_i(l) = \frac{s_{il}}{\sum_{l'} s_{il'}}.$
(f) Update the match set $M^{(k)}$ iteratively: if the similarity of a pair is high, encourage the matches of its consistent nodes; otherwise, discourage them. For example,
$p_i^{(k+1)}(l) \propto p_i^{(k)}(l)\,\bigl(A + B\,q_i^{(k)}(l)\bigr),$
then normalize the updated probabilities, where $q_i^{(k)}(l)$ is the support from consistent neighboring nodes and $A$ and $B$ are constants.
(g) Remove match pairs of small similarity (small match probability) from the match set $M^{(k)}$.
(h) Repeat steps (f) and (g) until each node has no more than one label in $M^{(k)}$.
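A minimal sketch of the whole relaxation loop on NumPy point arrays; the similarity form, the support measure, and the constants c, A, B, p_min are illustrative assumptions rather than the slides' exact choices:

```python
def relaxation_match(P1, P2, neighbors, c=1.0, A=0.3, B=3.0,
                     n_iter=20, p_min=0.05):
    """Relaxation-labeling point matcher.
    P1: (m, 2) frame-1 points (object nodes); P2: (n, 2) frame-2 points
    (labels); neighbors[i]: indices of the nodes related to node i."""
    disp = P1[:, None, :] - P2[None, :, :]        # candidate displacements
    sim = 1.0 / (1.0 + c * np.linalg.norm(disp, axis=2))
    p = sim / sim.sum(axis=1, keepdims=True)      # initial probabilities
    for _ in range(n_iter):
        q = np.zeros_like(p)                      # neighbor support
        for i in range(len(P1)):
            for j in neighbors[i]:
                # Neighbors support labels implying a similar displacement.
                agree = np.exp(-np.linalg.norm(
                    disp[i][:, None, :] - disp[j][None, :, :], axis=2))
                q[i] += (agree * p[j][None, :]).max(axis=1)
        p *= A + B * q                            # (f) encourage/discourage
        p[p < p_min] = 0.0                        # (g) drop weak matches
        s = p.sum(axis=1, keepdims=True)
        p = np.divide(p, s, out=np.zeros_like(p), where=s > 0)
    return [int(np.argmax(row)) if row.max() > 0 else None for row in p]
```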
2) Matching line correspondences
Given: two sets of lines in image A and image B, respectively.
Find: unique correspondences of lines between images A and B.
(a) Matching function:
----- Position disparity. The relative position of a line in an image, e.g. its midpoint or its signed distance from the origin measured perpendicular to the edge direction $\theta$; the position disparity is accumulated between two subsets of image lines from images A and B.
----- Orientation disparity: e.g. the difference of the edge directions, $\Delta\theta = \theta_A - \theta_B$, accumulated over the matched subsets.
----- Other disparities: length of the line; intensity of the original image; contrast; steepness; straightness (residual of a least-squares line fit).
(b) Kernel match: match a small subset of the image lines of frames A and B. For robustness of the kernel:
----- the number of lines should be no less than 3;
----- the lines should be long (stable);
----- the lines should not be parallel;
----- the lines should be separated as much as possible.
Minimize the match function over the selected subsets of the two image frames, e.g.
$E = \sum_k w_k\,\Delta a_k,$
where $a_k$ ----- attributes, such as position and orientation; $w_k$ ----- weights.
(c) Match expansion: once kernel matching is completed, the obtained line correspondences serve as a reference for matching the remaining lines.
Choose the longest line among the unmatched lines of image A. Add it to the matched kernel subset of image A and calculate the match function for every unmatched line in image B. The line of image B with the minimum match function is taken as the matching line. Add this matched pair of lines to the matching kernel and repeat the process until no further lines need to be matched.
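A much-simplified sketch of the match function and the expansion loop; lines are represented as (midpoint, angle, length) tuples and the weights are illustrative. The slides' version re-evaluates the full subset match function against the kernel, while this greedy variant scores candidate pairs directly:

```python
def match_cost(la, lb, w_pos=1.0, w_ang=1.0, w_len=0.2):
    """Weighted sum of position, orientation, and length disparities."""
    (pa, ta, na), (pb, tb, nb) = la, lb
    d_ang = abs((ta - tb + np.pi / 2) % np.pi - np.pi / 2)  # angle gap mod pi
    return (w_pos * float(np.linalg.norm(np.subtract(pa, pb)))
            + w_ang * d_ang + w_len * abs(na - nb))

def expand_matches(kernel, rest_a, rest_b):
    """(c) Match expansion: take the longest unmatched line of A,
    accept the best-scoring line of B, repeat until nothing is left."""
    for la in sorted(rest_a, key=lambda l: -l[2]):
        if not rest_b:
            break
        lb = min(rest_b, key=lambda l: match_cost(la, l))
        kernel.append((la, lb))
        rest_b.remove(lb)
    return kernel
```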
3) Tracking
Given: $m$ objects moving in the scene and a sequence of $n$ image frames taken of the scene.
Find: the trajectory of each object across the image sequence, i.e.
$T_i = \langle P_i^1, P_i^2, \ldots, P_i^n \rangle, \quad i = 1, 2, \ldots, m.$
Let $d_i^k$ denote the trajectory deviation at frame $k$:
$d_i^k = \phi\bigl(\overline{P_i^{k-1} P_i^k},\; \overline{P_i^k P_i^{k+1}}\bigr).$
We want to minimize the total deviation $D_i = \sum_k d_i^k$ for each object.
(a) Path coherence function
Assume that in consecutive image frames:
----- the change of object location is small;
----- the change of scalar velocity (speed) is small;
----- the change of moving direction is small.
A path coherence function can then be defined as
$\phi = w_1\,(1 - \cos\theta) + w_2\left(1 - \frac{2\sqrt{d_1 d_2}}{d_1 + d_2}\right),$
where $w_1$, $w_2$ are weights.
$\theta$ ----- the turning angle from $\overline{P_i^{k-1} P_i^k}$ to $\overline{P_i^k P_i^{k+1}}$;
$d_1, d_2$ ----- the distances between $P_i^{k-1}$, $P_i^k$ and between $P_i^k$, $P_i^{k+1}$, respectively.
(b) Occlusion problem
When occlusions occur in the scene as seen from the viewpoint, some target points may disappear from some image frames. This causes problems in tracking.
To solve this problem, we introduce the concept of a phantom point:
----- a virtual object point in a frame;
----- has no specified coordinates;
----- is placed at the maximum distance $d_{\max}$ from its neighbors in the consecutive frames;
----- takes the maximum value of the path coherence function.
Thus, whenever a phantom point appears in a triple of consecutive positions, its displacement is capped at $d_{\max}$ and its deviation is set to the maximum value $\phi_{\max}$ of the path coherence function.
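A sketch of the deviation function with phantom-point handling, following the two-term form defined above; the weights and phi_max are illustrative:

```python
def path_coherence(p0, p1, p2, w1=0.1, w2=0.9, phi_max=1.0):
    """Deviation of the triple (p0, p1, p2); any point may be None
    (a phantom point), which forces the maximum deviation."""
    if p0 is None or p1 is None or p2 is None:
        return phi_max
    v1 = np.subtract(p1, p0)
    v2 = np.subtract(p2, p1)
    d1, d2 = np.linalg.norm(v1), np.linalg.norm(v2)
    if d1 == 0 or d2 == 0:
        return 0.0
    cos_t = np.dot(v1, v2) / (d1 * d2)            # direction change
    speed = 2 * np.sqrt(d1 * d2) / (d1 + d2)      # 1 when d1 == d2
    return w1 * (1 - cos_t) + w2 * (1 - speed)
```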
Tracking algorithm
(a) Path initialization: form an initial trajectory for each feature point by linking the nearest points in consecutive frames, from frame 1 to frame $n$. If feature points are missing in some frames, introduce phantom points.
(b) Exchange assignments: from frame 2 to frame $n-1$, for each frame:
(i) Perform a forward exchange: for each feature point $i$ of frame $k$, find all combinations with possible feature points $j$ of frame $k+1$ within a window of $d_{\max}$.
Calculate the gain
$G_{ij}^k = \bigl[\mathrm{Dev}(P_i^{k-1}, P_i^k, P_i^{k+1}) + \mathrm{Dev}(P_j^{k-1}, P_j^k, P_j^{k+1})\bigr] - \bigl[\mathrm{Dev}(P_i^{k-1}, P_i^k, P_j^{k+1}) + \mathrm{Dev}(P_j^{k-1}, P_j^k, P_i^{k+1})\bigr].$
Find the $i$-$j$ pair with the maximum gain $G_{\max} = \max_{i,j} G_{ij}^k$. If $G_{\max} > 0$, exchange the trajectory assignments of $i$ and $j$ in frame $k+1$.
(ii) Perform a backward exchange: for each point $i$ of frame $k$, find all combinations with possible points $j$ of frame $k-1$ within a window of $d_{\max}$.
Calculate
$G_{ij}^k = \bigl[\mathrm{Dev}(P_i^{k-1}, P_i^k, P_i^{k+1}) + \mathrm{Dev}(P_j^{k-1}, P_j^k, P_j^{k+1})\bigr] - \bigl[\mathrm{Dev}(P_j^{k-1}, P_i^k, P_i^{k+1}) + \mathrm{Dev}(P_i^{k-1}, P_j^k, P_j^{k+1})\bigr].$
Find the $i$-$j$ pair with $G_{\max} = \max_{i,j} G_{ij}^k$. If $G_{\max} > 0$, exchange the trajectory assignments of $i$ and $j$ in frame $k-1$.
Note: the final trajectories may still contain phantom points.
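A sketch of one forward-exchange pass at frame k, using path_coherence from above as Dev; tracks[i][k] holds $P_i^k$ (or None for a phantom point), and repeating until no positive gain remains is one reasonable loop structure:

```python
def forward_exchange(tracks, k):
    """Swap frame-(k+1) assignments of any pair i, j whose exchange
    gain G (as defined above) is positive; repeat until stable."""
    def dev(i, j, a, b):   # deviation sum if i gets point a, j gets point b
        return (path_coherence(tracks[i][k - 1], tracks[i][k], a)
                + path_coherence(tracks[j][k - 1], tracks[j][k], b))
    while True:
        best, gain = None, 0.0
        for i in range(len(tracks)):
            for j in range(i + 1, len(tracks)):
                pi1, pj1 = tracks[i][k + 1], tracks[j][k + 1]
                g = dev(i, j, pi1, pj1) - dev(i, j, pj1, pi1)
                if g > gain:
                    best, gain = (i, j), g
        if best is None:
            return tracks
        i, j = best
        tracks[i][k + 1], tracks[j][k + 1] = tracks[j][k + 1], tracks[i][k + 1]
```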
The lecture is over. Thank you for your attendance.
We hope you have learned some basic concepts of computer vision in this lecture, and that they will be helpful in your research.