1 Background Subtraction based on Cooccurrence of Image Variations Seki, Wada, Fujiwara & Sumi - 2003 Presented by: Alon Pakash & Gilad Karni

2 Motivation Detecting foreground objects in dynamic scenes involving swaying trees and fluttering flags.

3 Dynamic Scenes

4 Background Subtraction so far: stationary background; permissible range of image variation; dynamic update of the background model; cooccurrence…

5 Permissible range of image variation: the input image (as a vector) is compared against a background model in a feature space (e.g. chosen pixels, DCT coefficients, …).

6 The problem: the background model learned from the training set has a large variance, and a big variance means the detection sensitivity decreases!

7 The solution: dynamically narrow the permissible range… by using cooccurrence.

8 “Cooccurrence” What is Cooccurrence? Image variations at neighboring image blocks have strong correlation!

9 Permissible range with cooccurrence: the input image (as a vector) is checked against a background model that is narrowed, in feature space, by a cooccurrence database of background image variations (compared with the model that ignores cooccurrence).

10 Cooccurrence – “Is it really that good?” Partition the image into N×N blocks; at time t, block u is represented by the pattern i(u,t).

11 Example: Sunlight changes

12 Illustrating Principal Components Analysis Our Goal: Revealing the internal structure of the data in a way which best explains the variance in the data

13 Illustrating Principal Components Analysis

14–15 (image-only slides)

16 Example: Sunlight changes

17 Vectorization: each N×N block is flattened into a 1×N² vector.

18 Projection onto the principal axes e1, e2.
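Slides 17–18 can be sketched in a few lines of numpy: flatten each N×N block into a 1×N² vector per frame, then project the mean-centered vectors onto the top principal axes e1, e2. This is a minimal illustration, not the authors' code; the toy frames and function names are assumptions.

```python
import numpy as np

def block_vectors(frames, u, N):
    """Flatten the N x N block with top-left corner u from each frame
    into a 1 x N^2 row vector (one row per time step)."""
    r, c = u
    return np.stack([f[r:r+N, c:c+N].ravel() for f in frames])

def pca_project(X, k):
    """Project mean-centered samples onto the top-k principal axes of
    their covariance, computed via SVD."""
    mean = X.mean(axis=0)
    Xc = X - mean
    # rows of Vt are the unit eigenvectors e1, e2, ... of the covariance
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T, Vt[:k], mean

# toy sequence: 3 frames of an 8x8 image, block size N = 4
rng = np.random.default_rng(0)
frames = [rng.random((8, 8)) for _ in range(3)]
X = block_vectors(frames, (0, 0), 4)   # shape (3, 16)
Z, E, mu = pca_project(X, 2)           # coordinates in the (e1, e2) plane
print(X.shape, Z.shape)
```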

19 (image-only slide)

20 Another example: tree sway – Block A, Block B

21 Block A Block B

22 Cooccurrence – Cont’d Also holds for: – higher-dimensional feature spaces – other neighboring blocks in the image – fluttering flags Conclusion: image variations in neighboring blocks have strong correlation!
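The correlation claim of slides 20–22 is easy to check numerically: under a shared variation (say, a global brightness change), the mean-intensity time series of two adjacent blocks move together. A minimal synthetic check, with all data and names assumed for illustration:

```python
import numpy as np

def block_mean_series(frames, u, N):
    """Mean intensity of the N x N block at corner u, over time."""
    r, c = u
    return np.array([f[r:r+N, c:c+N].mean() for f in frames])

# synthetic "sunlight change": a global brightness ramp plus tiny noise,
# so neighboring blocks vary together
rng = np.random.default_rng(1)
base = rng.random((8, 8))
frames = [base * (0.5 + 0.05 * t) + 0.001 * rng.random((8, 8))
          for t in range(20)]

a = block_mean_series(frames, (0, 0), 4)   # block A
b = block_mean_series(frames, (0, 4), 4)   # its right-hand neighbor B
r = np.corrcoef(a, b)[0, 1]
print(round(r, 3))   # strongly correlated under the shared variation
```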

23 Background Subtraction Method The general idea: Narrow the background image variations by estimating the background image in each block from the neighboring blocks in the input image

24 [Figure: eigenspaces (e1, e2, e3) of blocks A and B, with paired training patterns z(A,t1)/z(B,t1), z(A,t2)/z(B,t2), z(A,t3)/z(B,t3); the input pattern zA in block A selects nearby training samples, whose cooccurring block-B patterns give the background estimate z* near zB]

25 [Figure: in block B's eigenspace, the background estimate z* is obtained from the cooccurring training patterns z(B,t1), z(B,t2), z(B,t3) and compared with the observed pattern zB]
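The estimation step of slides 23–25 can be sketched as a nearest-neighbour lookup: the input pattern in neighboring block A selects the closest training samples in A's eigenspace, and their cooccurring block-B patterns are averaged into the background estimate z*. This is a simplified sketch under assumed data and a hypothetical threshold, not the paper's exact procedure:

```python
import numpy as np

def estimate_background(zA_train, zB_train, zA_input, K=3):
    """Cooccurrence estimate: find the K training patterns in block A's
    eigenspace closest to the input, and average their cooccurring
    block-B patterns to get the background estimate z*."""
    d = np.linalg.norm(zA_train - zA_input, axis=1)
    idx = np.argsort(d)[:K]
    return zB_train[idx].mean(axis=0)

def is_foreground(zB_input, z_star, thresh):
    """Flag block B as foreground if its observed pattern lies farther
    than thresh from the narrowed background estimate z*."""
    return np.linalg.norm(zB_input - z_star) > thresh

# toy training pairs: zB tracks zA, i.e. strong cooccurrence
rng = np.random.default_rng(2)
zA_train = rng.random((50, 3))
zB_train = zA_train + 0.01 * rng.random((50, 3))

zA_in = zA_train[7]                      # a background-like input in A
z_star = estimate_background(zA_train, zB_train, zA_in)
print(is_foreground(zB_train[7], z_star, thresh=0.5))   # False
```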

26 Advantages Since the method utilizes the spatial property of background image variations, it is not affected by quick image variations. It can be applied not only to background object motions, such as swaying tree leaves, but also to illumination variations.

27 Experiments

28 Difference Picture

29 The experiment procedure Number of dimensions? Number of neighbors?

30 Num. of dimensions Determination of the dimensions of the eigenspace: increase until more than 90% of the blocks are “effective”.
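The stopping rule of slide 30 amounts to growing the eigenspace dimension until the effectiveness criterion is met. A minimal sketch of that loop, with the per-dimension effectiveness test left as a caller-supplied predicate since its concrete definition does not appear in this transcript:

```python
def choose_dimension(effective_fraction, max_dim, target=0.90):
    """Grow the eigenspace dimension k until more than `target` of the
    blocks are reported effective; effective_fraction(k) returns the
    fraction of effective blocks at dimension k (caller-supplied)."""
    for k in range(1, max_dim + 1):
        if effective_fraction(k) > target:
            return k
    return max_dim

# toy criterion: effectiveness rises linearly with dimension
print(choose_dimension(lambda k: k / 10, max_dim=10))
```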

31 Num. of neighbors Determination of the number of neighbors: increase until the error (the Euclidean distance in the eigenspace) is small enough.

32 Comparison to other methods Method 1: learning in the same feature space for each block; background subtraction using Mahalanobis distances. Method 2: doesn’t use cooccurrence; relies only on the input pattern in the focused block. Method 3: the proposed method.
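Method 1's per-block test can be sketched as a Mahalanobis distance against a Gaussian background model fitted to the training patterns. A minimal illustration with assumed toy data, not the authors' implementation:

```python
import numpy as np

def mahalanobis(x, mean, cov):
    """Mahalanobis distance of pattern x from a Gaussian background
    model (mean, cov) learned per block."""
    diff = x - mean
    return float(np.sqrt(diff @ np.linalg.inv(cov) @ diff))

# toy per-block background model fitted from training patterns
rng = np.random.default_rng(3)
train = rng.normal(size=(200, 2))
mean, cov = train.mean(axis=0), np.cov(train, rowvar=False)
print(mahalanobis(mean, mean, cov))   # 0.0 at the model mean
```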

33–54 (image-only slides)
55 Belief Propagation in a 3D Spatio-temporal MRF for Moving Object Detection Yin & Collins - 2007

56 Dis/Similarity The surroundings of an element are taken into consideration; pixel vs. block.

57 Problems solved in this method Objects camouflaged by similar appearance to the background Objects with uniform color

58 Markov Random Field [Figure: a grid of +/− site labels] P(X_ij = + | X_km, (k,m) ≠ (i,j)) = P(X_ij = + | X_km, k = i±1, m = j±1)

59 With the realization that each pixel influences neighboring pixels spatially and temporally in the video sequence, we develop a 3D MRF (Markov Random Field) model to represent the system.

60 [Figure: frames i−1, i, i+1, each with an observed-data layer and a hidden-state layer]

61 Hidden State & Observed Data Hidden state – represents the likelihood that a pixel contains object motion Observed data – represents the binary motion detection result

62 Relations between hidden and observed nodes If an observed node is “0” (no motion), its corresponding hidden node will contain a uniform distribution; otherwise, it will contain an impulse distribution. Evidence function: Φ_j(s_k, d_k).

63 Relations between hidden nodes Each hidden node encourages its neighboring nodes to have the same state. Compatibility function: Ψ_jk.
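The two potentials of slides 62–63 can be written down directly: a no-motion observation yields a uniform evidence distribution, a motion observation a near-impulse, and the pairwise compatibility favors equal neighboring states. The exact numbers (0.9 affinity, epsilon for the impulse) are assumed for illustration:

```python
import numpy as np

def phi(d, eps=1e-3):
    """Evidence: a no-motion observation (d == 0) gives a uniform
    distribution over {background, foreground}; a motion observation
    gives a near-impulse on the foreground state."""
    if d == 0:
        return np.array([0.5, 0.5])
    return np.array([eps, 1.0 - eps])

def psi(same=0.9):
    """Pairwise compatibility: neighboring hidden nodes are encouraged
    to take the same state."""
    return np.array([[same, 1 - same],
                     [1 - same, same]])

print(phi(0), phi(1))
```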

64 Belief Propagation In a nutshell A powerful algorithm for making approximate inferences over joint distributions defined by MRF models
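The message-update rule behind belief propagation can be sketched on two hidden nodes: a message from j to k sums the local evidence times the pairwise compatibility (times any other incoming messages), and a node's belief is its evidence times all incoming messages. A minimal sum-product sketch with assumed numbers, not Yin & Collins' implementation:

```python
import numpy as np

def message(phi_j, psi_jk, incoming):
    """Sum-product message from node j to node k:
    m(s_k) = sum_{s_j} phi_j(s_j) * psi(s_j, s_k) * prod(incoming)."""
    prod = phi_j.copy()
    for m in incoming:
        prod = prod * m
    out = psi_jk.T @ prod
    return out / out.sum()

def belief(phi_j, incoming):
    """Belief at a node: local evidence times all incoming messages."""
    b = phi_j.copy()
    for m in incoming:
        b = b * m
    return b / b.sum()

# two hidden nodes: j saw motion (impulse-like evidence), k saw none
psi_jk = np.array([[0.9, 0.1], [0.1, 0.9]])
phi_j = np.array([0.05, 0.95])
phi_k = np.array([0.5, 0.5])
m_jk = message(phi_j, psi_jk, [])
b_k = belief(phi_k, [m_jk])
print(np.round(b_k, 3))   # j's motion evidence pulls k toward motion
```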

65 (image-only slide)
66 Message Update Schedule Different message passing schedules have different effects on the detection process.

67–76 (image-only slides)
77 To conclude Copes with shape changes; not affected by speed changes of the moving object; handles low-resolution videos (e.g. thermal).

