1 Motion in 2D image sequences
Definitely used in human vision
Object detection and tracking
Navigation and obstacle avoidance
Analysis of actions or activities
Segmentation and understanding of video sequences

2 Change detection for surveillance
Video frames: F1, F2, F3, …
Objects appear, move, disappear
Background pixels remain the same
Subtracting image Fm from Fn should show change in the difference
Change in background is only noise
Significant change at object boundaries

3 Person detected entering room Pixel changes detected as difference regions (components). Regions are (1) person, (2) opened door, and (3) computer monitor. System can know about the door and monitor. Only the person region is “unexpected”.

4 Change detection via image subtraction
for each pixel [r,c]
    if (|I1[r,c] - I2[r,c]| > threshold) then Iout[r,c] = 1
    else Iout[r,c] = 0
Perform connected components on Iout.
Remove small regions.
Perform a closing with a small disk for merging close neighbors.
Compute and return the bounding boxes B of each remaining region.
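
As an illustrative sketch only (not part of the original slides), the same pipeline can be written in Python with NumPy and OpenCV. The function name, threshold, minimum region area, and disk size below are assumed values, and the closing is applied to the binary mask before labeling for simplicity.

import cv2
import numpy as np

def detect_change(frame1, frame2, threshold=30, min_area=50):
    """Return bounding boxes of regions that changed between two 8-bit gray frames."""
    # 1. Threshold the absolute difference image.
    diff = cv2.absdiff(frame1, frame2)
    _, mask = cv2.threshold(diff, threshold, 255, cv2.THRESH_BINARY)

    # 2. Closing with a small disk merges nearby change pixels.
    disk = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (5, 5))
    mask = cv2.morphologyEx(mask, cv2.MORPH_CLOSE, disk)

    # 3. Connected components; drop small regions; collect bounding boxes.
    n, labels, stats, _ = cv2.connectedComponentsWithStats(mask)
    boxes = []
    for i in range(1, n):                      # label 0 is the background
        x, y, w, h, area = stats[i]
        if area >= min_area:
            boxes.append((x, y, w, h))
    return boxes

Called as detect_change(F1, F2) on two grayscale frames, it returns (x, y, w, h) boxes for the changed regions.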

5 Change analysis Known regions are ignored and system attends to the unexpected region of change. Region has bounding box similar to that of a person. System might then zoom in on “head” area and attempt face recognition.

6 Some cases of motion sensing
Still camera, single moving object, constant background
Still camera, several moving objects, constant background
Moving camera, relatively constant scene
Moving camera, several moving objects

7 Approach to motion analysis
Detect regions of change across video frames Ft and F(t+1)
Correlate region features to define motion vectors
Analyze motion trajectory to determine kind of motion and possibly identify the moving object

8 Flow vectors resulting from camera motion Zooming a camera gives results similar to those we see when we move forward or backward in a scene. Panning effects are similar to what we see when we turn.

9 Image flow field
The image flow field (or motion field) is a 2D array of 2D vectors representing the motion of 3D scene points in 2D space.
(Figure: image at time t, image at time t + δt, and the resulting sparse flow field.)
What kind of points are easily tracked?
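
As a practical aside (not from the slides), a sparse flow field like the one pictured can be obtained today with OpenCV's corner detector and pyramidal Lucas-Kanade tracker, rather than the correlation-based matching developed on the later slides. The function name and parameter values below are illustrative assumptions; frames are assumed to be 8-bit grayscale.

import cv2
import numpy as np

def sparse_flow(frame1, frame2, max_corners=200):
    """Track easily matched corner points from frame1 to frame2.
    Returns two (N, 2) arrays of matched point positions."""
    p0 = cv2.goodFeaturesToTrack(frame1, maxCorners=max_corners,
                                 qualityLevel=0.01, minDistance=7)
    p1, status, _err = cv2.calcOpticalFlowPyrLK(frame1, frame2, p0, None)
    good = status.ravel() == 1                 # keep only successfully tracked points
    return p0[good].reshape(-1, 2), p1[good].reshape(-1, 2)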

10 The Decathlete Game (Left) Man makes running movements with arms. (Right) Display shows his avatar running. Camera controls speed and jumping according to his movements.

11 Program interprets motion
(a) Opposite flow vectors mean RUN; speed is determined by vector magnitude.
(b) Upward flow means JUMP.
(c) Downward flow means COME DOWN.
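
A minimal sketch of how such rules might be coded, assuming a dense flow field is already available (e.g., from the point matching described on the following slides). The thresholds and the interpret_flow name are illustrative choices, not from the slides.

import numpy as np

def interpret_flow(flow):
    """Map a dense flow field (H x W x 2 array of [dx, dy]) to a game command.
    Image coordinates: y grows downward, so a negative mean dy means upward motion.
    Thresholds here are illustrative."""
    mean_dy = flow[..., 1].mean()
    speed = np.sqrt(flow[..., 0] ** 2 + flow[..., 1] ** 2).mean()

    if mean_dy < -1.0:
        return "JUMP"
    if mean_dy > 1.0:
        return "COME DOWN"
    # Opposite left/right arm motion largely cancels in the mean flow,
    # so use the average magnitude to set the running speed.
    return f"RUN at speed {speed:.1f}"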

12 Flow vectors from point matches Significant neighborhoods are matched from frame k to frame k+1. Three similar sets of such vectors correspond to three moving objects.

13 Examples (Chris Bowron). Figure panels: first image, second image, interesting points, motion vectors, clusters.

14 Two aerial photos of a city (Chris Bowron). Figure panels: first image, second image, interesting points, motion vectors, clusters.

15 Requirements for interest points
Have unique multidirectional energy
Detected and located with confidence
Edge detector not good (1D energy only)
Corner detector is better (2D constraint)
Autocorrelation can be used for matching neighborhood from frame k to one from frame k+1

16 Interest point detection method
Examine every K x K image neighborhood.
Consider four "1D signals" through its center – horizontal, vertical, diagonal 1, and diagonal 2.
Find the intensity variance along each of the four directions.
The interest value is the MINIMUM of the four variances.

17 Interest point detection algorithm
for a window of size w x w,
for each pixel [r,c] in image I
    if I[r,c] is not a border pixel and interest_operator(I,r,c,w) ≥ threshold
    then add [(r,c),(r,c)] to the set of interest points

procedure interest_operator(I, r, c, w)
{
    v1 = intensity variance of horizontal pixels I[r,c-w] … I[r,c+w]
    v2 = intensity variance of vertical pixels I[r-w,c] … I[r+w,c]
    v3 = intensity variance of diagonal pixels I[r-w,c-w] … I[r+w,c+w]
    v4 = intensity variance of diagonal pixels I[r-w,c+w] … I[r+w,c-w]
    return minimum(v1, v2, v3, v4)
}
(The second (r,c) is a placeholder for the end point of a vector.)
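
The same algorithm written as a Python/NumPy sketch; the default window half-width w and the threshold value are illustrative assumptions, not values given on the slide.

import numpy as np

def interest_operator(I, r, c, w):
    """Minimum intensity variance over four 1D neighborhoods through (r, c)."""
    offsets = np.arange(-w, w + 1)
    v1 = np.var(I[r, c + offsets])            # horizontal
    v2 = np.var(I[r + offsets, c])            # vertical
    v3 = np.var(I[r + offsets, c + offsets])  # diagonal 1
    v4 = np.var(I[r + offsets, c - offsets])  # diagonal 2
    return min(v1, v2, v3, v4)

def detect_interest_points(I, w=3, threshold=100.0):
    """Return [(r, c)] for pixels whose interest value reaches the threshold."""
    I = I.astype(np.float64)
    points = []
    rows, cols = I.shape
    for r in range(w, rows - w):               # skip border pixels
        for c in range(w, cols - w):
            if interest_operator(I, r, c, w) >= threshold:
                points.append((r, c))
    return points

The double loop is written for clarity; a practical implementation would vectorize the variance computations.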

18 Matching interest points
Can use normalized cross correlation or image difference.
(Cross correlation formula: see p. 169 of the text.)
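
A minimal sketch of correlation-based matching, assuming grayscale frames; the patch half-width w and the search range are illustrative choices, and an image-difference (SAD) score could be substituted for the correlation.

import numpy as np

def normalized_cross_correlation(patch1, patch2):
    """Normalized cross correlation of two equal-sized patches, in [-1, 1]."""
    a = patch1.astype(np.float64) - patch1.mean()
    b = patch2.astype(np.float64) - patch2.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return (a * b).sum() / denom if denom > 0 else 0.0

def match_point(I1, I2, r, c, w=7, search=15):
    """Find the best match in I2 for the (2w+1)^2 patch around (r, c) in I1,
    searching a (2*search+1)^2 window; returns ((r2, c2), score)."""
    patch1 = I1[r - w:r + w + 1, c - w:c + w + 1]
    best, best_pos = -2.0, (r, c)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            r2, c2 = r + dr, c + dc
            if w <= r2 < I2.shape[0] - w and w <= c2 < I2.shape[1] - w:
                patch2 = I2[r2 - w:r2 + w + 1, c2 - w:c2 + w + 1]
                score = normalized_cross_correlation(patch1, patch2)
                if score > best:
                    best, best_pos = score, (r2, c2)
    return best_pos, best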

19 Moving robot sensor: two views and their edges. Bottom right shows the overlaid edge images.

20 MPEG Motion Compression
Some frames are encoded in terms of others.
Independent (I) frame: encoded as a still image using JPEG.
Predicted (P) frame: encoded via flow vectors relative to the independent frame, plus a difference image.
Between (B) frame: encoded using flow vectors relative to both the independent and the predicted frame.

21 MPEG compression method
F1 is independent, F4 is predicted, and F2 and F3 are between frames.
Each block of P is matched to its closest match in the independent frame F1 and represented by a motion vector and a block difference image.
Frames B1 and B2, between I and P, are represented by two motion vectors per block, referring to blocks in F1 and F4.
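
A minimal sketch of the block-matching step for a predicted frame, using exhaustive search and the sum of absolute differences. The block size of 16 matches the next slide; the search range and the SAD criterion are illustrative assumptions, and a real MPEG encoder would also encode the block difference image and use faster search strategies.

import numpy as np

def block_motion_vectors(ref, cur, block=16, search=8):
    """Estimate one motion vector per block of `cur` by exhaustive search in `ref`.
    Returns an array of shape (rows, cols, 2) holding (dr, dc) per block."""
    H, W = cur.shape
    rows, cols = H // block, W // block
    vectors = np.zeros((rows, cols, 2), dtype=int)
    ref = ref.astype(np.int32)
    cur = cur.astype(np.int32)
    for br in range(rows):
        for bc in range(cols):
            r0, c0 = br * block, bc * block
            target = cur[r0:r0 + block, c0:c0 + block]
            best_sad, best = None, (0, 0)
            for dr in range(-search, search + 1):
                for dc in range(-search, search + 1):
                    r, c = r0 + dr, c0 + dc
                    if 0 <= r and r + block <= H and 0 <= c and c + block <= W:
                        cand = ref[r:r + block, c:c + block]
                        sad = np.abs(target - cand).sum()
                        if best_sad is None or sad < best_sad:
                            best_sad, best = sad, (dr, dc)
            vectors[br, bc] = best
    return vectors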

22 Example of compression
Assume frames are 512 x 512 bytes (one byte per pixel), or 32 x 32 blocks of size 16 x 16 pixels.
Frame A is ¼ megabyte before JPEG compression.
Frame B uses 32 x 32 = 1024 motion vectors, or only 2048 bytes if delX and delY are represented as 1-byte integers.
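
The arithmetic from this slide, spelled out in a few lines of Python (taking 1 megabyte as 2^20 bytes):

frame_bytes = 512 * 512                    # 262144 bytes = 1/4 megabyte before JPEG
blocks = (512 // 16) * (512 // 16)         # 32 x 32 = 1024 blocks
vector_bytes = blocks * 2                  # 1-byte delX + 1-byte delY per block = 2048 bytes
print(frame_bytes, blocks, vector_bytes)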

23 Computing image flow Goal is to compute a dense flow field with a vector for every pixel. We have already discussed how to do it for interest points with unique neighborhoods. Can we do it for all image points?

24 Computing image flow Example of image flow: a brighter triangle moves 1 pixel upward from time t1 to time t2. Background intensity is 3 while object intensity is 9.
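
A small synthetic version of this example, as a sketch: the 10 x 10 frame size and the exact triangle shape are arbitrary choices made here; only the intensities (background 3, object 9) and the 1-pixel upward shift come from the slide.

import numpy as np

def triangle_frame(top_row, size=10, bg=3, fg=9):
    """Toy frame: background intensity bg with a filled 4-row triangle of intensity fg."""
    f = np.full((size, size), bg, dtype=int)
    for i in range(4):
        f[top_row + i, 4 - i:5 + i] = fg      # triangle widens by one pixel per row
    return f

f1 = triangle_frame(top_row=4)                # time t1
f2 = triangle_frame(top_row=3)                # time t2: triangle moved 1 pixel upward
diff = f2 - f1
# Nonzero differences appear only along the triangle's boundary (apex, base,
# and slanted sides); interior pixels keep the same intensity, so motion can
# only be observed where the spatial gradient is nonzero.
print(np.argwhere(diff != 0))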

25 Optical flow Optical flow is the apparent flow of intensities across the retina due to motion of objects in the scene or motion of the observer. We can use a continuous mathematical model and attempt to compute a spatio-temporal gradient at each image point I [x, y, t], which represents the optical flow.

26 Assumptions for the analysis
Object reflectivity does not change from t1 to t2.
Illumination does not change from t1 to t2.
Distances between the object, the light source, and the camera do not change significantly from t1 to t2.
Assume a continuous intensity function of continuous spatial parameters x, y.
Assume each intensity neighborhood at time t1 is observed in a shifted position at time t2.

27 Image flow equation
The image flow vector V = [δx, δy] maps the intensity neighborhood N1 of (x, y) at t1 to an identical neighborhood N2 of (x + δx, y + δy) at t2, which yields:

    f(x + δx, y + δy, t2) = f(x, y, t1)

Using the continuity of the intensity function and a Taylor series expansion (with t2 = t1 + δt) we get:

    f(x + δx, y + δy, t2) ≈ f(x, y, t1) + ∂f/∂x · δx + ∂f/∂y · δy + ∂f/∂t · δt

Combining the two, we get the image flow equation

    ∇f · [δx, δy] = − ∂f/∂t · δt,

which gives not a solution but a linear constraint on the flow.

28 Meaning of the image flow equation

    − ∂f/∂t · δt = ∇f · [δx, δy]

The change in the image function f over time corresponds (with opposite sign) to the dot product of the spatial gradient ∇f and the flow vector V = [δx, δy].
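
Since the equation is a single linear constraint in two unknowns, one common way to use it (not shown on these slides) is to assume the flow is constant over a small window and solve the stacked constraints by least squares, in the style of Lucas-Kanade. A minimal sketch, assuming two grayscale frames one time step apart; the window half-width w and function name are illustrative.

import numpy as np

def flow_at(f1, f2, r, c, w=7):
    """Estimate the flow vector at (r, c) by solving fx*vx + fy*vy + ft = 0
    over a (2w+1)^2 window in the least-squares sense."""
    f1 = f1.astype(np.float64)
    f2 = f2.astype(np.float64)
    # Spatio-temporal gradients by simple finite differences.
    fy, fx = np.gradient(f1)                  # np.gradient returns d/drow, d/dcol
    ft = f2 - f1
    win = (slice(r - w, r + w + 1), slice(c - w, c + w + 1))
    A = np.stack([fx[win].ravel(), fy[win].ravel()], axis=1)  # N x 2
    b = -ft[win].ravel()
    # Least-squares solution; a well-textured window makes the system well conditioned.
    v, *_ = np.linalg.lstsq(A, b, rcond=None)
    return v                                   # [vx, vy]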

29 Segmenting videos
Build video segment database
Scene change is a change of environment: newsroom to street
Shot change is a change of camera view of same scene
Camera pan and zoom, as before
Fade, dissolve, wipe are used for transitions
Fade is similar to blend of Project 3
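
A dissolve (and hence a fade, where one of the frames is black) is just a per-pixel blend. A tiny sketch, with names chosen here for illustration:

import numpy as np

def dissolve(frame_a, frame_b, alpha):
    """Cross-dissolve between two frames; alpha runs 0 -> 1 over the transition."""
    return ((1 - alpha) * frame_a.astype(np.float64)
            + alpha * frame_b.astype(np.float64)).astype(np.uint8)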

30 Scene change

31 Detect via histogram change
(Top) Gray level histogram of intensities from frame 1 in newsroom.
(Middle) Histogram of intensities from frame 2 in newsroom.
(Bottom) Histogram of intensities from street scene.
Histograms change less with pan and zoom of same scene.
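
A minimal sketch of histogram-based shot/scene change detection on grayscale frames; the bin count and threshold below are illustrative assumptions.

import numpy as np

def histogram_change(frame1, frame2, bins=64):
    """Sum of absolute differences between normalized gray-level histograms (range 0..2)."""
    h1, _ = np.histogram(frame1, bins=bins, range=(0, 256))
    h2, _ = np.histogram(frame2, bins=bins, range=(0, 256))
    h1 = h1 / h1.sum()
    h2 = h2 / h2.sum()
    return np.abs(h1 - h2).sum()

def find_shot_boundaries(frames, threshold=0.5):
    """Indices i where the histogram distance between frame i and i+1 exceeds the threshold."""
    return [i for i in range(len(frames) - 1)
            if histogram_change(frames[i], frames[i + 1]) > threshold]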

32 Daniel Gatica-Perez’s work on describing video content

33 Hierarchical Description of Video Content: MPEG-7 Definitions of DESCRIPTIONS for indexing, retrieval, and filtering of visual content. Syntax and semantics of elementary features, and their relations and structure.

34 Our problem (II): Finding Video Structure
Video structure: hierarchical description of visual content
Table of Contents
From thousands of raw frames to video events

35 Hierarchical Structure in Video: Extensive Operators
Shots: consecutive frames recorded from a single camera
Clusters: collections of temporally adjacent / visually similar shots
Scenes: a semantic concept. Fair to use?
(Diagram: hierarchy of frame, shot, cluster, scene, video sequence.)

36 One scenario: home video analysis
Accessing consumer video
Organizing and editing personal memories
The problems:
– Lack of Storyline
– Unrestricted Content
– Random Quality
– Non-edited
– Changes of Appearance
– With/without time-stamps
– Non-continuous audio

37 Our approach
Investigate statistical models for consumer video
Bayesian formulation for video structuring: encode prior knowledge, learn models from data
Hierarchical representation of video segments: extensive partition operators
User interface: visualization, correction, and reorganization of the Table of Contents

38 Our Approach (block diagram): video sequence → temporal partition generation → video shot feature extraction → probabilistic hierarchical clustering → construction of video segment tree.

39 Video Structuring Results (I) 35 shots 9 clusters detected

40 Video Structuring Results (II) 12 shots 4 clusters

41 Tree-based Video Representation

42 Motion analysis is on the current frontier of computer vision
Surveillance and security
Video segmentation and indexing
Robotics and autonomous navigation
Biometric diagnostics
Human/computer interfaces