Video Processing: Lecture on the image part (8+9), Automatic Perception 12+15. Volker Krüger, Aalborg Media Lab, Aalborg University Copenhagen

Presentation transcript:

1 Video Processing Lecture on the image part (8+9) Automatic Perception Volker Krüger Aalborg Media Lab Aalborg University Copenhagen

2 Agenda
Segmentation in video (finding the object(s))
–MED3 applications
Small and gentle intro to statistics
Probably next time: Tracking: follow the object(s) over time
What to remember

3 Segmentation in Video

4 Videos
Videos are image sequences over time.
An image is a function over x and y: I(x, y).
A video is a function of images over time t: I(x, y, t).
At each time step t we have an image.
Framerate = the number of images per second, e.g., 25 images/s.
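A minimal sketch (an assumption, not part of the lecture) of how such a video can be held in memory as a stack of grayscale images indexed by time, using NumPy:

    import numpy as np

    # A video I(x, y, t): a stack of T grayscale frames of size H x W
    T, H, W = 250, 480, 640            # e.g., 10 s at a framerate of 25 images/s
    video = np.zeros((T, H, W), dtype=np.uint8)

    frame = video[42]                  # the image at time step t = 42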

5 Segmentation in Video
General segmentation is application-dependent! => Knowledge base!!
One application: Finding the object(s)
–MED3 applications
–Preprocessing, Segmentation
Tracking = follow the object(s) over time:
–being able at each time t to give, e.g., position, color, orientation, etc.
–Representation, Description
(Diagram: the image-processing pipeline from the problem domain via image acquisition, preprocessing, segmentation, representation and description to recognition and interpretation and the result, with the knowledge base connected to all steps.)

6 Segmentation Lots of applications!! One is: Separation of Foreground (object) and Background (everything else = noise) Result could be a –Binary image, containing foreground only –Probability image, containing the likelihood of each pixel being foreground Useful for further processing, such as using silhouettes, etc. Approaches –Motion-based –Color-based –Some approaches can learn! (demos!)

7 Foreground-Background Segmentation using Motion and Color

8 Motion-based
–Model-free
–No learning
–Image differencing
Color-based
–Background subtraction: background used as a model, no learning
–Advanced background subtraction: background is learned
–Very advanced background subtraction: background is learned

9 Image Differencing

10 Image Differencing
The motion in an image can be found by subtracting the previous image from the current image.
Algorithm
1. Save the image from the last frame
2. Capture the current camera image
3. Subtract the images (= difference = motion)
4. Threshold
5. Delete noise

11 3. Subtract Image
Compute pixel-wise: subtract the previous image from the input image:
D(x, y) = I_t(x, y) - I_{t-1}(x, y)
Usually the absolute distance is applied: D(x, y) = | I_t(x, y) - I_{t-1}(x, y) |
(Algorithm: 1. Save the image from the last frame 2. Capture camera image 3. Subtract image 4. Threshold 5. Delete noise)

12 4. Threshold
Decide when a pixel is to be considered a background pixel and when a foreground pixel:
A pixel is a foreground pixel if D(x, y) > TH
A pixel is a background pixel if D(x, y) <= TH
Problem: What TH?!?
(Algorithm: 1. Save the image from the last frame 2. Capture camera image 3. Subtract image 4. Threshold 5. Delete noise)
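A minimal NumPy sketch of steps 3 and 4, assuming grayscale frames; the threshold value of 30 is only an illustrative choice and has to be tuned for the actual setup:

    import numpy as np

    def difference_mask(prev_frame, curr_frame, th=30):
        # Step 3: pixel-wise absolute difference between consecutive frames
        diff = np.abs(curr_frame.astype(np.int16) - prev_frame.astype(np.int16))
        # Step 4: pixels whose difference exceeds TH are foreground
        return (diff > th).astype(np.uint8)   # binary image: 1 = foreground, 0 = background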

13 5. Deleting Noise
Singular pixels are likely to appear:
–Pixel noise!!
Apply a median filter:
–Depending on the filter size, bigger spots can be erased
Alternative: morphology
(Algorithm: 1. Save the image from the last frame 2. Capture camera image 3. Subtract image 4. Threshold 5. Delete noise)
(show: patch: diff)
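A sketch of step 5 using SciPy's median filter (one possible choice; OpenCV's medianBlur would do the same job):

    from scipy import ndimage

    def remove_noise(binary_mask, filter_size=5):
        # Singular noise pixels vanish; a larger filter size erases bigger spots
        return ndimage.median_filter(binary_mask, size=filter_size)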

14 Background Subtraction

15 Background Subtraction
Foreground is moving, background is stable.
Algorithm
1. Capture an image containing the background
2. Capture camera image
3. Subtract image (= difference = motion)
4. Threshold
5. Delete noise
(show: patch: bg_1)
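A self-contained sketch of this pipeline; threshold and filter size are again assumptions to be tuned:

    import numpy as np
    from scipy import ndimage

    def background_subtraction(frame, background, th=30, filter_size=5):
        # Step 3: subtract the stored background image from the current frame
        diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
        # Steps 4-5: threshold, then median-filter away pixel noise
        mask = (diff > th).astype(np.uint8)
        return ndimage.median_filter(mask, size=filter_size)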

16 Advanced Background Subtraction

17 Advanced Background Subtraction
What if we have small motion in the background?
–Bushes, leaves, etc., and noise in the camera/lighting
–(show histo patch)
Learn(!) the background:
Capture N images and calculate the average background image (no object present)
(Algorithm: 1. Calculate average background image 2. Capture camera image 3. Subtract image (= motion) 4. Threshold 5. Delete noise)
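A minimal sketch of step 1, averaging N background frames given as a list of NumPy arrays; the learned average then takes the place of the single background image in the earlier sketch:

    import numpy as np

    def learn_background(frames):
        # Average N images of the empty scene (no object present)
        return np.stack(frames, axis=0).astype(np.float64).mean(axis=0)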

18 Very Advanced Background Subtraction

19 Very Advanced Background Subtraction
Use the neighborhood relation!!
–Compare a pixel with its neighbors!!
–Weight them!!
Learn the background and its variations!!
–E.g., Gaussian models (mean, var) for each pixel!!!
–E.g., a histogram for each pixel (both will be revisited later!!)
–The more images you train on, the better!!
–Idea: Some pixels may vary more than others
–Algorithm: Consider each pixel (x, y) in the input image and check how much it deviates from the mean and variance of its learned Gaussian model
(Algorithm: 1. Calculate mean and variance for each pixel 2. Capture camera image 3. Subtract image (= motion) 4. Weight the distances (new) 5. Threshold according to variance 6. Delete noise)
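A sketch of the per-pixel Gaussian model under the same NumPy assumptions as above; the deviation is expressed in standard deviations so it can be weighted and thresholded in the following steps:

    import numpy as np

    def learn_pixel_model(frames):
        # Per-pixel background model: mean and variance over N training images
        stack = np.stack(frames, axis=0).astype(np.float64)
        return stack.mean(axis=0), stack.var(axis=0)

    def deviation_image(frame, mean, var, eps=1e-6):
        # How far each pixel lies from its learned model, in standard deviations
        return np.abs(frame.astype(np.float64) - mean) / np.sqrt(var + eps)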

20 Weight the Distances, Correlation between Pixel values If one pixel is considered to be a foreground pixel –Its neighbor is also likely to be a foreground pixel –If its neighbor is not considered to be a foreground pixel, one of the two might be wrong –Neighboring pixels are highly correlated (similar) 1.Calculate mean and variance for each pixel 2.Capture camera image 3.Subtract image (= motion) 4.Weight the distances (new) 5.Threshold according to variance 6.Delete noise

21 Weight the Distances
What does a pixel say about a foreground pixel that is further away?
–Pixels at an increasing distance from each other say less about each other
–The correlation between pixels decreases with distance
(Algorithm: 1. Calculate mean and variance for each pixel 2. Capture camera image 3. Subtract image (= motion) 4. Weight the distances (new) 5. Threshold according to variance 6. Delete noise)

22 Weight the Distances
Use a Gaussian for weighting. To test a pixel I(x, y):
–Center a Gaussian on this pixel and weight the neighboring pixels accordingly => Convolution!
(1D example figure: a signal, a Gaussian filter, and the resulting output.)
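A sketch of step 4 using a Gaussian convolution from SciPy (sigma is an illustrative value):

    from scipy import ndimage

    def weight_distances(dev_image, sigma=2.0):
        # Convolve the per-pixel deviations with a Gaussian, so each pixel's
        # evidence is supported by its highly correlated neighbors
        return ndimage.gaussian_filter(dev_image, sigma=sigma)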

23 Weight the Distances Likelihood image or Probability image, containing the likelihood for each pixel of being a background/foreground pixel 1.Calculate mean and variance for each pixel 2.Capture camera image 3.Subtract image (= motion) 4.Weight the distances (new) 5.Threshold according to variance 6.Delete noise

24 A little detour to Statistics

25 Statistics
Mean
–The center of gravity of the object: mean_x = (1/N) Σ x_i, mean_y = (1/N) Σ y_i
Variance
–The variance measures the variation of the object pixel positions around the center of gravity: var_x = (1/N) Σ (x_i - mean_x)², var_y = (1/N) Σ (y_i - mean_y)²
N is the number of object pixels.
(Example figure: an elongated object for which var_x is big and var_y is small.)
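A small sketch computing these statistics from a binary object mask:

    import numpy as np

    def object_statistics(binary_mask):
        ys, xs = np.nonzero(binary_mask)       # coordinates of all object pixels
        mean = (xs.mean(), ys.mean())          # center of gravity (mean_x, mean_y)
        var = (xs.var(), ys.var())             # spread around the center of gravity
        return mean, var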

26 Statistics
Standard deviation: sigma (σ), the square root of the variance
Normal distribution = Gaussian distribution
For a Gaussian, roughly 68% of the samples lie within mean ± 1σ, about 95% within mean ± 2σ, and about 99.7% within mean ± 3σ.

27 Statistics
How to use it
–"Automatic" thresholding based on statistics
Example: the color of the hand
–Training => mean color
–Algorithm: hand pixel if TH_min < pixel < TH_max
–How do we define TH_min and TH_max?
–Use statistics: TH_min = mean - 2σ and TH_max = mean + 2σ
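A sketch of this idea for one color channel; the factor 2 (mean ± 2σ) follows the slide, the rest is an assumed minimal interface:

    import numpy as np

    def learn_color_model(training_pixels):
        # training_pixels: 1-D array of hand-color samples for one channel
        return training_pixels.mean(), training_pixels.std()

    def segment_channel(channel, mean, sigma):
        th_min, th_max = mean - 2 * sigma, mean + 2 * sigma   # "automatic" thresholds
        return ((channel > th_min) & (channel < th_max)).astype(np.uint8)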

28 Threshold According to Variance
The threshold can be chosen depending on the variance
–A local threshold
Standard deviation, for example:
–Object pixel if TH_min < pixel < TH_max
–TH_min = mean - σ
–TH_max = mean + σ
(Table: percentage of samples per range, as on the statistics slide.)
(Algorithm: 1. Calculate mean and variance for each pixel 2. Capture camera image 3. Subtract image (= motion) 4. Weight the distances (new) 5. Threshold according to variance 6. Delete noise)

29 Segmentation using Color

30 Segmentation using Color Assumption: the object has a different color than the background –As seen in Chroma keying Two approaches –Color histogram –Gaussian distributions

31 Segmentation using Color Histogram

32 Segmentation using Color Histogram
Given an object, defined by its color histogram
Algorithm:
–Segment each pixel in the input image according to the histogram
The result: a binary or probability image
–White pixels: the pixel in the input image had a color defined by the histogram
–Black pixels: the pixel in the input image did not have a color defined by the histogram

33 Learning the Color Histogram
Recall a gray-value histogram.
The color histogram summarizes the color distribution in an image (region).
There is a histogram bin for each possible color!
The columns of the histogram are high for those colors appearing often in the image (region) and low for those appearing seldom.
The more images you train on, the better!!

34 Color Segmentation with Histograms
Algorithm:
–For each pixel in the input I(x, y):
–Go to the bin having the same color as the pixel: H( I(x,y) )
–Assign the probability value at that bin to the output image: O(x, y) = H( I(x,y) )
Using the above leads to a probability image.
Run a threshold on the output image to get a binary image.
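A sketch of this histogram lookup for RGB images, assuming a normalized histogram with the same number of bins per channel; thresholding the returned probability image then gives the binary result:

    import numpy as np

    def backproject(image, hist, bins=8):
        # hist: normalized color histogram H with `bins` bins per channel,
        # learned from training images of the object
        idx = image // (256 // bins)                        # bin index of each pixel
        return hist[idx[..., 0], idx[..., 1], idx[..., 2]]  # O(x, y) = H( I(x, y) )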

35 Segmentation using Gaussian Distributions

36 Segmentation with Gaussian Distribution
Given an object, calculate the mean and standard deviation of its color
–Represent each color component (e.g., R, G, B) by a Gaussian model
–That is, a Gaussian distribution: (mean, sigma)
The same principle as in "very advanced background subtraction"!!
Algorithm:
–Given an input image, segment each pixel by comparing it to the Gaussian models of the object color
For example:
–Object pixel if TH_min < pixel < TH_max
–TH_min = mean - σ
–TH_max = mean + σ
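A per-channel sketch of this test; k = 1 corresponds to mean ± σ as on the slide, and k is otherwise an assumed parameter:

    import numpy as np

    def segment_gaussian(image, mean, sigma, k=1.0):
        # mean, sigma: arrays with one Gaussian model per color component (e.g., R, G, B)
        # A pixel is an object pixel if every channel lies within mean +/- k*sigma
        within = np.abs(image.astype(np.float64) - mean) < k * sigma
        return within.all(axis=-1).astype(np.uint8)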

37 Comparing the Methods
When to use a color histogram and when to use Gaussian models?
Plot the training data and look at it!
Gaussian models (with per-channel thresholds) assume the data is spread evenly within a rectangle in color space
Histograms do not assume anything
Gaussian models require less training
Histograms require a decision regarding the resolution of the bins and the threshold
Thresholds in Gaussian models have a physical interpretation
(Figure: two scatter plots of the training data over the r and g color components.)

38 Tracking

39 Tracking Follow the object(s) over time –Finding the trajectory (curve connecting the positions over time) Simple tracking: Advanced tracking: –Cluttered background and multiple objects

40 Tracking
For MED3 projects
–Use tracking to predict where the object will be in the next image
–This allows us to specify the ROI (region of interest)
–Focus the algorithm: save computational resources, and it is less likely that we will find an incorrect object
(show: patch: color_track)

41 Prediction
Given the position of the object in previous images, where do we think (predict) the object will be in the current image?
We need a motion model.
The size of the search region depends on:
–The uncertainty of the prediction
–The framerate
–The maximum speed of the object
(Figure: predicted position with a search region around it.)

42 Motion Model
Predicted position at time t:
Brownian motion: the new position is the previous position plus Gaussian noise
0th order: x_t = x_{t-1}
–Similar for y
1st order (constant velocity): x_t = x_{t-1} + (x_{t-1} - x_{t-2})
–Similar for y
2nd order (constant acceleration): x_t = x_{t-1} + (x_{t-1} - x_{t-2}) + ((x_{t-1} - x_{t-2}) - (x_{t-2} - x_{t-3}))
–Similar for y
Many other types exist: look at your application!
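A sketch of these predictions from a list of past positions; the discrete constant-velocity and constant-acceleration forms below are one common choice, not necessarily the exact formulas used in the lecture. The predicted position is then the center of the search region, whose size reflects the prediction uncertainty, the framerate, and the maximum speed of the object.

    import numpy as np

    def predict_position(positions, order=1):
        # positions: past (x, y) positions, most recent last
        p = np.asarray(positions, dtype=np.float64)
        if order == 0 or len(p) < 2:
            return p[-1]                        # 0th order: object stays where it was
        v = p[-1] - p[-2]                       # velocity from the last two positions
        if order == 1 or len(p) < 3:
            return p[-1] + v                    # 1st order: constant velocity
        a = (p[-1] - p[-2]) - (p[-2] - p[-3])   # change of velocity
        return p[-1] + v + a                    # 2nd order: constant acceleration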

43 What to remember
Statistics: mean and variance
Motion segmentation
–Image differencing (two images)
–Background subtraction (one bg. image)
–Advanced background subtraction (many bg. images)
–Very advanced background subtraction (learn each pixel)
Color segmentation
–Using a histogram
–Gaussian models
Tracking
–Prediction
–Motion model