
1 Pose Estimation  Ohad Mosafi

2 Motivation  Detecting people in images is a key problem for video indexing, browsing and retrieval.  The main difficulties are the large appearance variations caused by action, clothing, illumination, viewpoint and scale.

3 Finding people by sampling  People can be quite accurately modeled as assemblies of cylinders, and these assemblies are constrained by the kinematics of human joints.  People are represented as collections of nine body segments: one for the torso and two for each limb.

4 Finding people by sampling  Symmetries: pairs of edge elements that are approximately symmetric about some symmetry axis and whose tangents are approximately parallel to that axis.  Segments: extended groups of symmetries that approximately share the same axis (found with an expectation-maximization algorithm that assumes a fixed number of segments).  A learned likelihood model is then used to form assemblies by sampling.  Finally, the set of assemblies is replaced with a smaller set of representatives, which are used to count people in the image.

6 Thumbtack

7 Maximum Likelihood

10 Log-likelihood

12 Log-likelihood  Reconsider the thumbtack: 8 up, 2 down.
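
The equations on these slides are images that did not survive extraction; a standard reconstruction of the thumbtack maximum-likelihood computation (a Bernoulli model with 8 heads-up and 2 heads-down outcomes) is:

$$
L(\theta) = \theta^{8}(1-\theta)^{2}, \qquad \ell(\theta) = \log L(\theta) = 8\log\theta + 2\log(1-\theta)
$$

$$
\frac{d\ell}{d\theta} = \frac{8}{\theta} - \frac{2}{1-\theta} = 0 \;\Rightarrow\; \hat{\theta}_{\mathrm{MLE}} = \frac{8}{8+2} = 0.8
$$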

13 Log-likelihood (cont.)

14 Finding Segments Using EM  A symmetry fits a segment best when the midpoint of the symmetry lies on the segment's symmetry axis, the endpoints lie half a segment width away from the axis, and the symmetry is perpendicular to the axis. This yields the conditional likelihood for a symmetry given a segment as a four-dimensional Gaussian, and an EM algorithm can now fit a fixed number of segments to the symmetries.
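
The formula on the slide is an image; one plausible way to write the conditional likelihood just described (the exact parameterization is an assumption, not taken from the paper) is

$$
p(s \mid \text{segment}) = \mathcal{N}\big(r(s);\, 0,\, \Sigma\big), \qquad r(s) = \Big(d_{\text{mid}},\; d_1 - \tfrac{w}{2},\; d_2 - \tfrac{w}{2},\; \theta\Big),
$$

where $d_{\text{mid}}$ is the distance of the symmetry's midpoint from the segment axis, $d_1, d_2$ are the distances of its endpoints from the axis, $w$ is the segment width, and $\theta$ is the symmetry's deviation from perpendicular to the axis.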

15 Mixture of Gaussians
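
The density on the slide is an image; the standard mixture-of-Gaussians model it depicts is

$$
p(x) = \sum_{k=1}^{K} \pi_k \, \mathcal{N}(x \mid \mu_k, \Sigma_k), \qquad \sum_{k=1}^{K} \pi_k = 1, \quad \pi_k \ge 0.
$$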

17 Expectation Maximization

19 EM Derivation

20 EM Derivation (cont.)

21 Jensen’s inequality
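
The statement on the slide is an image; the standard form of Jensen's inequality for a concave function $f$, and the way EM uses it (with any distribution $q(z)$) to lower-bound the log-likelihood, is

$$
f\big(\mathbb{E}[X]\big) \ge \mathbb{E}\big[f(X)\big],
$$

$$
\log p(x \mid \theta) = \log \sum_z q(z)\, \frac{p(x, z \mid \theta)}{q(z)} \;\ge\; \sum_z q(z) \log \frac{p(x, z \mid \theta)}{q(z)}.
$$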

22 EM Derivation (cont.)

23 EM for Mixture of Gaussians
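
The update equations on these slides are images. As a concrete illustration, here is a minimal numpy sketch of EM for a mixture of Gaussians; the initialization and regularization choices are mine, and note that the paper's use of EM fits four-dimensional Gaussians over symmetries rather than generic data:

```python
import numpy as np

def em_gmm(X, K, n_iter=100, seed=0):
    """Fit a K-component Gaussian mixture to data X of shape (n, d)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    pi = np.full(K, 1.0 / K)                      # mixing weights
    mu = X[rng.choice(n, size=K, replace=False)]  # means initialized from data
    sigma = np.array([np.cov(X.T) + 1e-6 * np.eye(d) for _ in range(K)])

    for _ in range(n_iter):
        # E-step: responsibilities r[i, k] = p(component k | x_i).
        r = np.zeros((n, K))
        for k in range(K):
            diff = X - mu[k]
            inv = np.linalg.inv(sigma[k])
            norm = np.sqrt((2 * np.pi) ** d * np.linalg.det(sigma[k]))
            r[:, k] = pi[k] * np.exp(-0.5 * np.sum(diff @ inv * diff, axis=1)) / norm
        r /= r.sum(axis=1, keepdims=True)

        # M-step: re-estimate weights, means, covariances from weighted data.
        Nk = r.sum(axis=0)
        pi = Nk / n
        mu = (r.T @ X) / Nk[:, None]
        for k in range(K):
            diff = X - mu[k]
            sigma[k] = (r[:, k, None] * diff).T @ diff / Nk[k] + 1e-6 * np.eye(d)
    return pi, mu, sigma, r
```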

26 Representing Likelihood for people

27 Marginal Likelihood

28 Building Assemblies Incrementally by Resampling

29 Building Assemblies Incrementally by Resampling (cont.)

30 Directing the Sampler

31 Example

32 Example (cont.)  Now suppose that we have fixed a torso and, possibly, some limbs, and we want to add the left arm that would maximize the likelihood of the result.  First, we find the highest-likelihood left arm for each choice of the upper arm. Since no feature involves both the left arm and another limb, we can choose the best left arm by considering all pairs of the (fixed) torso and a candidate left arm, and choosing the one with the largest marginal likelihood.

33 Counting people  This sampling algorithm also makes it possible to count people in images. To estimate the number of people, we begin by selecting a small set of representative assemblies in the image and then use them for counting.  We break the set of all assemblies in the image into (not necessarily disjoint) blocks: sets of assemblies such that any two assemblies from the same block have overlapping torsos. The representative of each block is then the assembly with the highest likelihood among all assemblies in the block.
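
A minimal sketch of the representative-selection step, assuming assemblies are (torso box, likelihood) pairs and that an overlap predicate on torso boxes is given. Visiting assemblies in decreasing likelihood order is a greedy approximation of the block construction described above, not the paper's exact algorithm:

```python
def pick_representatives(assemblies, overlap):
    """Keep the highest-likelihood assembly of each group of assemblies
    with mutually overlapping torsos (greedy approximation)."""
    reps = []
    # Highest-likelihood assemblies are visited first, so the first one
    # kept for any cluster of overlapping torsos is that block's best.
    for torso, likelihood in sorted(assemblies, key=lambda a: -a[1]):
        if all(not overlap(torso, kept) for kept, _ in reps):
            reps.append((torso, likelihood))
    return reps  # len(reps) is the estimated number of people
```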

35 Results  To learn the likelihood model, they used a set of 193 training images (subjects standing against a uniform background; all views were frontal and all limbs were visible, although the configurations varied).  The training set was expanded by adding the mirror image of each assembly, resulting in 386 configurations.  The test set (from the COREL database) included: o 145 control images with no people o 228 images with one person, some with two people, and 65 with three people.

39 Pictorial Structure of People

42 Detecting Body Parts The ultimate goal is to detect people and label them with detailed part locations, in applications where the person may be in any pose and partly occluded. Detecting and labelling body parts is a central problem in all component-based approaches. Clearly the image must be scanned at all relevant locations and scales, but there is also a question of how to handle different part orientations, especially for small, mobile, highly articulated parts such as arms and hands.
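
As a rough illustration of what scanning over positions, scales and orientations entails, a brute-force enumeration might look like the following sketch; the window size, stride, scale set and angle step are illustrative assumptions, not values from the paper:

```python
import numpy as np
from scipy.ndimage import rotate, zoom

def candidate_windows(image, window=32, stride=8,
                      scales=(1.0, 0.75, 0.5), angles=range(0, 180, 20)):
    """Yield (scale, angle, row, col, patch) for every candidate window."""
    for s in scales:
        scaled = zoom(image, s)                     # rescale the image
        for a in angles:
            rot = rotate(scaled, a, reshape=False)  # handle orientation
            for i in range(0, rot.shape[0] - window + 1, stride):
                for j in range(0, rot.shape[1] - window + 1, stride):
                    yield s, a, i, j, rot[i:i + window, j:j + window]
```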

44 Feature Sets

46 Training  Using the 2016-dimensional feature vectors for all body parts, they trained two linear classifiers for each part: o a Support Vector Machine (SVM) o a Relevance Vector Machine (RVM).  Training used the smallest sets of examples that give reasonable results: in this case, about 100.
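
A sketch of how the per-part linear SVMs could be set up with scikit-learn. The part names and the data here are placeholders for shape only, and the RVM has no equivalent in scikit-learn, so only the SVM half is shown:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
parts = ["torso", "head", "upper_arm", "forearm", "thigh", "calf"]  # hypothetical

classifiers = {}
for part in parts:
    # Stand-in data: 100 positive and 100 negative 2016-dimensional
    # feature vectors per part (random placeholders, purely illustrative).
    X = rng.normal(size=(200, 2016))
    y = np.array([1] * 100 + [-1] * 100)
    classifiers[part] = LinearSVC(C=1.0).fit(X, y)
```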

47 SVM: Linear Separators  Binary classification can be viewed as the task of separating classes in feature space. The hyperplane $w^T x + b = 0$ separates the regions $w^T x + b > 0$ and $w^T x + b < 0$, giving the classifier $f(x) = \mathrm{sign}(w^T x + b)$.

48 Linear Separators  Which of the linear separators is optimal?

49 SVM: Classification Margin  The distance from an example $x_i$ to the separator is $r = \frac{y_i (w^T x_i + b)}{\lVert w \rVert}$.  Examples closest to the hyperplane are support vectors.  The margin $\rho$ of the separator is the distance between the support vectors.

50 Maximum Margin Classification  Maximizing the margin is intuitively appealing.  It implies that only the support vectors matter; the other training examples can be ignored.

51 Linear SVMs Mathematically

53 We can formulate the quadratic optimization problem: find $w$ and $b$ such that the margin $\rho = \frac{2}{\lVert w \rVert}$ is maximized, and for all $(x_i, y_i)$, $i = 1..n$: $y_i (w^T x_i + b) \ge 1$.  This can be reformulated as: find $w$ and $b$ such that $\Phi(w) = \lVert w \rVert^2 = w^T w$ is minimized, and for all $(x_i, y_i)$, $i = 1..n$: $y_i (w^T x_i + b) \ge 1$.

54 Solving the Optimization Problem  We need to optimize a quadratic function subject to linear constraints: find $w$ and $b$ such that $\Phi(w) = w^T w$ is minimized, and for all $(x_i, y_i)$, $i = 1..n$: $y_i (w^T x_i + b) \ge 1$.  Quadratic optimization problems are a well-known class of mathematical programming problems for which several (non-trivial) algorithms exist.

55 Solving the Optimization Problem  The solution involves constructing a dual problem in which a Lagrange multiplier $\alpha_i$ is associated with every inequality constraint of the primal (original) problem: find $\alpha_1 \dots \alpha_n$ such that $Q(\alpha) = \sum_i \alpha_i - \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j x_i^T x_j$ is maximized, subject to (1) $\sum_i \alpha_i y_i = 0$ and (2) $\alpha_i \ge 0$ for all $i$.

56 Quadratic Program

57 SVM: The Optimization Problem Solution  Given a solution $\alpha_1 \dots \alpha_n$ to the dual problem, the solution to the primal is $w = \sum_i \alpha_i y_i x_i$ and $b = y_k - \sum_i \alpha_i y_i x_i^T x_k$ for any $\alpha_k > 0$.  Each non-zero $\alpha_i$ indicates that the corresponding $x_i$ is a support vector.  The classifying function is then $f(x) = \sum_i \alpha_i y_i x_i^T x + b$ (note that we do not need $w$ explicitly).  Notice that it relies on an inner product between the test point $x$ and the support vectors $x_i$; we will return to this later. Also keep in mind that solving the optimization problem involved computing the inner products $x_i^T x_j$ between all training points.

58 Non-linearly separable cases

59 Soft Margin Classification  What if the training set is not linearly separable?  Slack variables $\xi_i$ can be added to allow misclassification of difficult or noisy examples; the resulting margin is called soft.

60 Soft Margin Classification Mathematically  The old formulation: find $w$ and $b$ such that $\Phi(w) = w^T w$ is minimized, and for all $(x_i, y_i)$, $i = 1..n$: $y_i (w^T x_i + b) \ge 1$.

61 Soft Margin Classification Mathematically  The modified formulation incorporates slack variables: find $w$ and $b$ such that $\Phi(w) = w^T w + C \sum_i \xi_i$ is minimized, and for all $(x_i, y_i)$, $i = 1..n$: $y_i (w^T x_i + b) \ge 1 - \xi_i$, $\xi_i \ge 0$.  The parameter $C$ can be viewed as a way to control overfitting: it trades off the relative importance of maximizing the margin and fitting the training data.

62 Soft Margin Classification: Solution  The dual problem is identical to the separable case (it would not be identical if the 2-norm penalty $C \sum_i \xi_i^2$ were used in the primal objective; we would then need additional Lagrange multipliers for the slack variables): find $\alpha_1 \dots \alpha_n$ such that $Q(\alpha) = \sum_i \alpha_i - \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j x_i^T x_j$ is maximized, subject to (1) $\sum_i \alpha_i y_i = 0$ and (2) $0 \le \alpha_i \le C$ for all $i$.

63 Non-linear SVMs

64 Non-linear SVMs: Feature Spaces  General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set becomes separable.

65 The "Kernel Trick"  The linear classifier relies on inner products between vectors: $K(x_i, x_j) = x_i^T x_j$.  If every data point is mapped into a high-dimensional space via some transformation $\Phi: x \to \varphi(x)$, the inner product becomes $K(x_i, x_j) = \varphi(x_i)^T \varphi(x_j)$.  A kernel function is a function that is equivalent to an inner product in some feature space.  Thus, a kernel function implicitly maps data to a high-dimensional space, without the need to compute each $\varphi(x)$ explicitly.

66 The "Kernel Trick": Example  For 2-dimensional vectors $x = [x_1 \; x_2]$, let $K(x_i, x_j) = (1 + x_i^T x_j)^2$. We need to show that $K(x_i, x_j) = \varphi(x_i)^T \varphi(x_j)$:
$$K(x_i, x_j) = (1 + x_i^T x_j)^2 = (1 + x_{i1} x_{j1} + x_{i2} x_{j2})^2$$
$$= [1 \;\; x_{i1}^2 \;\; \sqrt{2}\, x_{i1} x_{i2} \;\; x_{i2}^2 \;\; \sqrt{2}\, x_{i1} \;\; \sqrt{2}\, x_{i2}]^T \, [1 \;\; x_{j1}^2 \;\; \sqrt{2}\, x_{j1} x_{j2} \;\; x_{j2}^2 \;\; \sqrt{2}\, x_{j1} \;\; \sqrt{2}\, x_{j2}] = \varphi(x_i)^T \varphi(x_j),$$
where $\varphi(x) = [1 \;\; x_1^2 \;\; \sqrt{2}\, x_1 x_2 \;\; x_2^2 \;\; \sqrt{2}\, x_1 \;\; \sqrt{2}\, x_2]$.
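
A quick numeric check of this identity, with arbitrarily chosen vectors:

```python
import numpy as np

def phi(x):
    # Explicit feature map for the quadratic kernel (1 + x.y)^2 in 2-D.
    x1, x2 = x
    return np.array([1, x1**2, np.sqrt(2) * x1 * x2, x2**2,
                     np.sqrt(2) * x1, np.sqrt(2) * x2])

x, y = np.array([1.0, 2.0]), np.array([3.0, -1.0])
assert np.isclose((1 + x @ y) ** 2, phi(x) @ phi(y))  # both equal 4.0
```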

67 Examples of Kernel Functions  Linear: $K(x_i, x_j) = x_i^T x_j$ o Mapping $\Phi: x \to \varphi(x)$, where $\varphi(x)$ is $x$ itself  Polynomial of power $p$: $K(x_i, x_j) = (1 + x_i^T x_j)^p$  Gaussian (radial basis function): $K(x_i, x_j) = \exp\!\left(-\frac{\lVert x_i - x_j \rVert^2}{2\sigma^2}\right)$ o Mapping $\Phi: x \to \varphi(x)$, where $\varphi(x)$ is infinite-dimensional o It decreases with distance and ranges between zero (in the limit) and one (when $x_i = x_j$), so it has a ready interpretation as a similarity measure.

68 Non-linear SVMs Mathematically  Dual problem formulation: find $\alpha_1 \dots \alpha_n$ such that $Q(\alpha) = \sum_i \alpha_i - \frac{1}{2} \sum_i \sum_j \alpha_i \alpha_j y_i y_j K(x_i, x_j)$ is maximized, subject to (1) $\sum_i \alpha_i y_i = 0$ and (2) $\alpha_i \ge 0$ for all $i$.  The solution is $f(x) = \sum_i \alpha_i y_i K(x_i, x) + b$.
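
For completeness, a minimal kernelized SVM in scikit-learn, trained on a toy dataset that is not linearly separable; the kernel and parameter values are arbitrary:

```python
from sklearn.datasets import make_moons
from sklearn.svm import SVC

X, y = make_moons(n_samples=200, noise=0.15, random_state=0)
clf = SVC(kernel="rbf", gamma=2.0, C=1.0).fit(X, y)  # dual SVM with RBF kernel
print(clf.score(X, y), len(clf.support_))  # training accuracy, #support vectors
```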

69 RVM: Relevance Vector Machines  A Bayesian alternative to the support vector machine. It addresses several limitations of the SVM: ◦ the SVM handles only two classes ◦ it retains a large number of kernels ◦ it makes decisions at its outputs instead of producing probabilities.

70 Parsing Algorithm

72 Learning the Body Tree  The articulation model is a linear combination of differences between pairs of joint locations.  Using positive and negative examples from the training set, they trained a linear SVM classifier to learn a set of weights such that the score is positive for all positive examples and negative for all negative examples.
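
A hedged sketch of how such an articulation score could be computed; the exact feature set (here, all pairwise differences between 2-D joint locations) is an assumption for illustration:

```python
import numpy as np

def articulation_features(joints):
    # joints: (n_joints, 2) array of 2-D joint locations (assumed format).
    J = np.asarray(joints, dtype=float)
    diffs = J[:, None, :] - J[None, :, :]  # all pairwise differences
    return diffs.reshape(-1)

# The score of a candidate pose is then w @ articulation_features(pose),
# with w learned by a linear SVM so that correct poses score positive
# and incorrect ones score negative.
```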

73 Results: Detection of Body Parts  The RVM classifiers perform only slightly worse than their SVM counterparts, with mean false detection rates of 80.1% and 78.5% respectively.  The worst results are obtained for the torso and head models. The torso is probably the hardest body part to detect, as it is almost entirely shapeless; it is probably best detected indirectly from geometric clues.  In contrast, the head is known to contain highly discriminant features, but the training images contain a wide range of poses, so significantly more training data would be needed.

75 Results: Detection of Body Trees  First experiment: 100 training examples.  They obtained correct detection rates of 72% using RVM scores and 83% using SVM scores.  However, a majority of the body parts were correctly positioned in only 36% of the test images for the RVM and 55% for the SVM.  Second experiment: 200 training examples.  Detection rate of 76% for SVM and 88% for RVM.

76 References  Finding People by Sampling, Ioffe & Forsyth, ICCV 1999.  Probabilistic Methods for Finding People, Ioffe & Forsyth, 2001.  Pictorial Structure Models for Object Recognition, Felzenszwalb & Huttenlocher, 2000.  Learning to Parse Pictures of People, Ronfard, Schmid & Triggs, ECCV 2002.

