Presentation transcript: "Video Processing, Lecture on the image part (8+9), Automatic Perception 12+15. Volker Krüger, Aalborg Media Lab, Aalborg University Copenhagen"

Slide 1: Video Processing
Lecture on the image part (8+9), Automatic Perception 12+15
Volker Krüger, Aalborg Media Lab, Aalborg University Copenhagen
vok@media.aau.dk

Slide 2: Agenda
- Segmentation in video (finding the object(s)): MED3 applications
- A small and gentle introduction to statistics
- Probably next time: tracking, i.e., following the object(s) over time
- What to remember

Slide 3: Segmentation in Video

Slide 4: Videos
Videos are image sequences over time.
- An image is a function over x and y: f(x, y)
- A video is a function of images over time t: f(x, y, t)
- At each time step t we have an image
- Frame rate = the number of images per second, e.g., 25 images/s

Slide 5: Segmentation in Video
General segmentation is application-dependent: it relies on a knowledge base!
One application: finding the object(s)
- MED3 applications
- Preprocessing, segmentation
Tracking = following the object(s) over time:
- being able at each time t to give, e.g., position, color, orientation, etc.
- Representation, description
Processing pipeline: problem domain, image acquisition, preprocessing, segmentation, representation and description, recognition and interpretation, result; all stages draw on the knowledge base.

Slide 6: Segmentation
Lots of applications! One is the separation of foreground (the object) and background (everything else = noise).
The result could be:
- a binary image, containing the foreground only
- a probability image, containing the likelihood of each pixel being foreground
Useful for further processing, such as working with silhouettes, etc.
Approaches:
- motion-based
- color-based
- some approaches can learn! (demos!)

Slide 7: Foreground-Background Segmentation using Motion and Color

Slide 8: Overview
Motion-based:
- model-free, no learning: image differencing
Color-based:
- background subtraction: the background is used as a model, no learning
- advanced background subtraction: the background is learned
- very advanced background subtraction: the background and its per-pixel variations are learned

Slide 9: Image Differencing

Slide 10: Image Differencing
The motion in an image can be found by subtracting the previous image from the current image.
Algorithm:
1. Save the image of the last frame
2. Capture the current camera image
3. Subtract the images (difference = motion)
4. Threshold
5. Delete noise

Slide 11: Step 3, Subtract Image
Compute pixel-wise: subtract the previous image from the input image. Usually the absolute difference is used:
D(x, y) = | I_t(x, y) - I_{t-1}(x, y) |

Slide 12: Step 4, Threshold
Decide when a pixel is to be considered a background pixel and when a foreground pixel:
- the pixel is a foreground pixel if D(x, y) > TH
- the pixel is a background pixel if D(x, y) <= TH
Problem: what TH?!?

Slide 13: Step 5, Deleting Noise
Isolated pixels are likely to appear: pixel noise!
Apply a median filter: depending on the filter size, bigger spots can also be erased.
Alternative: morphological operations.
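Steps 3 to 5 of the image-differencing algorithm can be sketched in plain Python; a minimal sketch, assuming grayscale images stored as 2-D lists of integers (the function names are illustrative, not from the lecture):

```python
def frame_difference(prev, curr, th):
    """Binary motion mask: absolute pixel-wise difference, then threshold."""
    return [[1 if abs(c - p) > th else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev, curr)]

def median3x3(img):
    """3x3 median filter to delete isolated pixel noise (borders kept as-is)."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            nb = sorted(img[y + dy][x + dx]
                        for dy in (-1, 0, 1) for dx in (-1, 0, 1))
            out[y][x] = nb[4]  # middle of the 9 sorted neighborhood values
    return out
```

A single noisy pixel survives the thresholded difference but is removed by the median filter, which is exactly the "delete noise" step.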

Slide 14: Background Subtraction

Slide 15: Background Subtraction
The foreground is moving, the background is stable.
Algorithm:
1. Capture an image containing only the background
2. Capture the camera image
3. Subtract the images (difference = motion)
4. Threshold
5. Delete noise

Slide 16: Advanced Background Subtraction

Slide 17: Advanced Background Subtraction
What if we have small motion in the background?
- bushes, leaves, etc., and noise in the camera/lighting
Learn(!) the background: capture N images (with no object present) and calculate the average background image.
Algorithm:
1. Calculate the average background image
2. Capture the camera image
3. Subtract the images (= motion)
4. Threshold
5. Delete noise
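The learning step is just a pixel-wise average over the N training frames; a minimal sketch under the same grayscale 2-D list assumption as before (function names are illustrative):

```python
def average_background(frames):
    """Pixel-wise mean over N background frames = the learned background."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]

def subtract_background(bg, frame, th):
    """Foreground mask: pixel differs from the learned background by more than th."""
    return [[1 if abs(frame[y][x] - bg[y][x]) > th else 0
             for x in range(len(bg[0]))]
            for y in range(len(bg))]
```

Averaging over many frames suppresses camera noise, so the threshold can be tighter than with a single background image.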

Slide 18: Very Advanced Background Subtraction

Slide 19: Very Advanced Background Subtraction
Use the neighborhood relation: compare each pixel with its neighbors and weight them.
Learn the background and its variations:
- e.g., a Gaussian model (mean, variance) for each pixel
- e.g., a histogram for each pixel
(Both will be revisited later.) The more images you train on, the better!
Idea: some pixels may vary more than other pixels.
Consider each pixel (x, y) in the input image and check how much it deviates from the mean and variance of its learned Gaussian model.
Algorithm:
1. Calculate the mean and variance for each pixel
2. Capture the camera image
3. Subtract the images (= motion)
4. Weight the distances (new)
5. Threshold according to the variance
6. Delete noise
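Learning a per-pixel Gaussian model amounts to computing a mean and a variance for every pixel position over the training frames; a minimal sketch with illustrative names, again assuming grayscale 2-D lists:

```python
def learn_pixel_models(frames):
    """Per-pixel mean and variance over N training frames."""
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    mean = [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]
    var = [[sum((f[y][x] - mean[y][x]) ** 2 for f in frames) / n
            for x in range(w)] for y in range(h)]
    return mean, var

def foreground_mask(frame, mean, var, k=2.0):
    """A pixel is foreground if it deviates by more than k sigma from its model."""
    return [[1 if abs(frame[y][x] - mean[y][x])
                  > k * max(var[y][x], 1e-6) ** 0.5 else 0
             for x in range(len(mean[0]))]
            for y in range(len(mean))]
```

Note how a flickering pixel (large variance) tolerates a large deviation, while a stable pixel (tiny variance) flags even a small one; this is exactly the "some pixels vary more than others" idea.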

Slide 20: Weight the Distances, Correlation between Pixel Values
If one pixel is considered to be a foreground pixel:
- its neighbor is also likely to be a foreground pixel
- if its neighbor is not considered a foreground pixel, one of the two might be wrong
- neighboring pixels are highly correlated (similar)

Slide 21: Weight the Distances
What does a pixel say about a foreground pixel that is further away?
- pixels with increasing distance to each other say less about each other
- the correlation between pixels decreases with distance

Slide 22: Weight the Distances
Use a Gaussian for weighting: to test a pixel I(x, y), center a Gaussian on this pixel and weight the neighboring pixels accordingly. This is a convolution!
1D example: a signal is convolved with the Gaussian filter [0.25, 0.5, 0.25] to produce a smoothed output.
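The 1D example on the slide can be reproduced directly; a minimal sketch of valid-mode convolution with the [0.25, 0.5, 0.25] kernel (the function name is illustrative):

```python
def convolve1d(signal, kernel):
    """Valid 1-D convolution: slide the kernel over the signal and sum weighted values."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

# Small discrete Gaussian; the weights sum to 1, so flat regions are unchanged.
gauss = [0.25, 0.5, 0.25]
```

A single spike of height 4 is spread over three samples as 1, 2, 1: each pixel's value leaks into its neighbors with Gaussian weights, which is exactly the neighborhood weighting used in step 4.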

Slide 23: Weight the Distances
The result is a likelihood image or probability image, containing for each pixel the likelihood of being a background/foreground pixel.

Slide 24: A Little Detour to Statistics

Slide 25: Statistics
Mean: the center of gravity of the object,
x_mean = (1/N) * sum(x_i),  y_mean = (1/N) * sum(y_i)
Variance: measures the variation of the object pixel positions around the center of gravity,
x_var = (1/N) * sum((x_i - x_mean)^2),  y_var = (1/N) * sum((y_i - y_mean)^2)
where N is the number of object pixels. For an elongated horizontal object, x_var is big and y_var is small.
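The formulas above translate directly to code; a minimal sketch that computes the center of gravity and the per-axis variance of a set of object pixel coordinates (the function name is illustrative):

```python
def blob_statistics(pixels):
    """Mean (center of gravity) and variance of object pixel positions.

    pixels: list of (x, y) coordinates of the N object pixels.
    """
    n = len(pixels)
    mx = sum(x for x, _ in pixels) / n
    my = sum(y for _, y in pixels) / n
    vx = sum((x - mx) ** 2 for x, _ in pixels) / n
    vy = sum((y - my) ** 2 for _, y in pixels) / n
    return (mx, my), (vx, vy)
```

For a horizontally elongated blob the x variance comes out larger than the y variance, matching the slide's figure.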

Slide 26: Statistics
Standard deviation: sigma (σ).
Normal distribution = Gaussian distribution.
Fraction of samples within a range around the mean:
- mean ± 1σ: 68.26 %
- mean ± 2σ: 95.44 %
- mean ± 3σ: 99.73 %
- mean ± 4σ: 99.99 %

Slide 27: Statistics
How to use it: "automatic" thresholding based on statistics.
Example: the color of the hand.
- Training gives the mean color.
- Algorithm: a pixel is a hand pixel if TH_min < pixel < TH_max.
- How do we define TH_min and TH_max?
- Use statistics: TH_min = mean - 2σ and TH_max = mean + 2σ.
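The automatic threshold can be learned in a few lines; a minimal sketch, assuming single-channel training pixel values (names are illustrative):

```python
def train_color_threshold(samples, k=2.0):
    """Learn TH_min/TH_max from training pixel values as mean +/- k*sigma."""
    n = len(samples)
    mean = sum(samples) / n
    sigma = (sum((s - mean) ** 2 for s in samples) / n) ** 0.5
    return mean - k * sigma, mean + k * sigma

def is_hand_pixel(value, th_min, th_max):
    """Classify a pixel with the learned thresholds."""
    return th_min < value < th_max
```

With k = 2 the range covers about 95.44 % of the training samples, per the table on the previous slide, so the threshold follows directly from the data instead of being hand-tuned.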

Slide 28: Threshold According to Variance
The threshold can be chosen depending on the variance: a local threshold, based on each pixel's standard deviation.
For example, with TH_min = mean - σ and TH_max = mean + σ: a pixel outside [TH_min, TH_max] deviates from its background model and is classified as an object (foreground) pixel.

Slide 29: Segmentation using Color

Slide 30: Segmentation using Color
Assumption: the object has a different color than the background (as seen in chroma keying).
Two approaches:
- color histogram
- Gaussian distributions

Slide 31: Segmentation using a Color Histogram

Slide 32: Segmentation using a Color Histogram
Given an object, defined by its color histogram.
Algorithm: segment each pixel in the input image according to the histogram.
The result is a binary or probability image:
- white pixels: the pixel in the input image had a color defined by the histogram
- black pixels: the pixel in the input image did not have a color defined by the histogram

Slide 33: Learning the Color Histogram
Recall a gray-value histogram. The color histogram summarizes the color distribution in an image (region).
There is a histogram bin for each possible color. The columns of the histogram are high for colors that appear often in the image (region) and low for colors that appear seldom.
The more images you train on, the better!

Slide 34: Color Segmentation with Histograms
Algorithm, for each pixel I(x, y) in the input:
- go to the bin having the same color as the pixel: H(I(x, y))
- assign the probability value at the bin to the output image: O(x, y) = H(I(x, y))
This produces a probability image. Run a threshold on the output image to get a binary image.
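The lookup O(x, y) = H(I(x, y)) is often called histogram backprojection; a minimal sketch using a single color channel for brevity (a real color histogram would bin all channels jointly; names and bin counts are illustrative):

```python
def learn_histogram(pixels, n_bins=8, max_val=256):
    """Normalized histogram from training pixel values (one channel for brevity)."""
    hist = [0.0] * n_bins
    for p in pixels:
        hist[p * n_bins // max_val] += 1
    total = sum(hist)
    return [h / total for h in hist]

def backproject(image, hist, n_bins=8, max_val=256):
    """Probability image: each pixel receives the histogram value of its bin."""
    return [[hist[p * n_bins // max_val] for p in row] for row in image]
```

Pixels whose color was common in the training region get high probability, unseen colors get zero; thresholding this output yields the binary mask from the slide.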

Slide 35: Segmentation using Gaussian Distributions

Slide 36: Segmentation with Gaussian Distributions
Given an object, calculate the mean and standard deviation of its color:
- represent each color component (e.g., R, G, B) by a Gaussian model
- that is, a Gaussian distribution (mean, sigma)
This is the same principle as in "very advanced background subtraction"!
Algorithm: given an input image, segment each pixel by comparing it to the Gaussian models of the object color.
For example:
- the pixel is an object pixel if TH_min < pixel < TH_max
- TH_min = mean - σ
- TH_max = mean + σ
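Per-channel Gaussian color segmentation can be sketched as follows; a minimal sketch with illustrative names, assuming (r, g, b) tuples as training samples:

```python
def learn_gaussian_color(samples):
    """Per-channel (mean, sigma) from training pixels [(r, g, b), ...]."""
    n = len(samples)
    models = []
    for c in range(3):
        vals = [s[c] for s in samples]
        mean = sum(vals) / n
        sigma = (sum((v - mean) ** 2 for v in vals) / n) ** 0.5
        models.append((mean, sigma))
    return models

def is_object_pixel(pixel, models, k=2.0):
    """Object pixel if every channel lies within mean +/- k*sigma.

    The small epsilon keeps zero-variance channels from rejecting everything.
    """
    return all(abs(pixel[c] - m) <= k * s + 1e-9
               for c, (m, s) in enumerate(models))
```

Unlike the histogram, this needs only six numbers (mean and sigma per channel), which is why the next slide notes that Gaussian models require less training data.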

Slide 37: Comparing the Methods
When to use a color histogram and when to use Gaussian models? Plot the training data and look at it!
- Gaussian models with per-channel thresholds assume the data falls inside a rectangle in color space; histograms do not assume anything about the distribution.
- Gaussian models require less training data.
- Histograms require a decision regarding the resolution of the bins and the threshold.
- Thresholds in Gaussian models have a physical interpretation.

Slide 38: Tracking

Slide 39: Tracking
Follow the object(s) over time: finding the trajectory (the curve connecting the positions over time).
Simple tracking.
Advanced tracking: cluttered background and multiple objects.

Slide 40: Tracking
For MED3 projects:
- use tracking to predict where the object will be in the next image
- this allows us to specify the ROI (region of interest) and focus the algorithm
- it saves computational resources
- it makes it less likely that we find an incorrect object

Slide 41: Prediction
Given the position of the object in previous images, where do we think (predict) the object will be in the current image? We need a motion model.
The size of the search region depends on:
- the uncertainty of the prediction
- the frame rate
- the maximum speed of the object

Slide 42: Motion Model
Predicted position at time t:
- Brownian motion: the position changes according to a Gaussian model around the previous position
- 0th order: x_t = x_{t-1}
- 1st order (constant velocity): x_t = x_{t-1} + (x_{t-1} - x_{t-2}); similar for y
- 2nd order (constant acceleration): x_t = x_{t-1} + (x_{t-1} - x_{t-2}) + ((x_{t-1} - x_{t-2}) - (x_{t-2} - x_{t-3})); similar for y
Many other types exist: look at your application!
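The three deterministic motion models above can be sketched in one function; a minimal sketch for one coordinate, with an illustrative name (the Brownian model would instead add Gaussian noise to the last position):

```python
def predict(history, order):
    """Predict the next position from past positions (most recent last).

    order 0: same place; order 1: constant velocity; order 2: constant acceleration.
    """
    x = history
    if order == 0:
        return x[-1]
    v = x[-1] - x[-2]                      # latest displacement = velocity
    if order == 1:
        return x[-1] + v
    a = (x[-1] - x[-2]) - (x[-2] - x[-3])  # change of displacement = acceleration
    return x[-1] + v + a
```

A higher-order model gives a tighter prediction for smooth motion, which in turn allows a smaller search region (ROI) in the next frame.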

Slide 43: What to Remember
Statistics: mean and variance.
Motion segmentation:
- image differencing (two images)
- background subtraction (one background image)
- advanced background subtraction (many background images)
- very advanced background subtraction (learn each pixel)
Color segmentation:
- using a histogram
- Gaussian models
Tracking:
- prediction
- motion model

