
1
Pose Estimation Ohad Mosafi

2
Motivation Detecting people in images is a key problem for video indexing, browsing, and retrieval. The main difficulties are the large appearance variations caused by action, clothing, illumination, viewpoint, and scale.

3
**Finding people by sampling**

People can be quite accurately modeled as assemblies of cylinders, and these assemblies are constrained by the kinematics of human joints. People are represented as collections of nine body segments: one for the torso and two for each limb.

4
**Finding people by sampling**

Symmetries: pairs of edge elements that are approximately symmetric about some symmetry axis and whose tangents are approximately parallel to that axis. Segments: extended groups of symmetries that approximately share the same axis (fit by an expectation-maximization algorithm that assumes a fixed number of segments). A learned likelihood model is used to form assemblies by sampling. Finally, the set of assemblies is replaced with a smaller set of representatives, which are used to count people in the image.

6
**Thumbtack** Let θ = P(up), 1 − θ = P(down). How do we determine θ?

Empirical estimate: 8 up, 2 down → θ = 8/10 = 0.8

7
**Maximum Likelihood** θ = P(up), 1 − θ = P(down). Observe a sequence of outcomes:

The likelihood of the observation sequence depends on θ: l(θ) = θ(1 − θ)θ(1 − θ)θθθθθθ = θ^8 (1 − θ)^2

8
**Maximum Likelihood**

Maximum likelihood finds argmax_θ l(θ) = argmax_θ θ^8 (1 − θ)^2. ∂l(θ)/∂θ = 8θ^7 (1 − θ)^2 − 2θ^8 (1 − θ) = θ^7 (1 − θ)(8 − 10θ). Extrema at θ = 0, 0.8, 1; hence θ_ML = 0.8.
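The extrema above can be checked numerically; a small grid search (a sketch, not part of the original slides) recovers θ_ML = 0.8:

```python
# Numerically confirm that theta = 0.8 maximizes l(theta) = theta^8 (1-theta)^2
def likelihood(theta, n_up=8, n_down=2):
    return theta ** n_up * (1 - theta) ** n_down

grid = [i / 1000 for i in range(1001)]      # theta in [0, 1] in steps of 0.001
theta_ml = max(grid, key=likelihood)        # -> 0.8
```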

9
Maximum Likelihood More generally, consider a binary-valued random variable with θ = p(1), 1 − θ = p(0); assume we observe n_1 ones and n_0 zeroes. Likelihood: l(θ) = θ^{n_1} (1 − θ)^{n_0}. Derivative: ∂l(θ)/∂θ = n_1 θ^{n_1 − 1} (1 − θ)^{n_0} − n_0 θ^{n_1} (1 − θ)^{n_0 − 1}. Hence the extrema are at θ = 1, 0, and n_1/(n_0 + n_1).

10
Log-likelihood The function log: ℝ_+ → ℝ, x ↦ log(x), is a monotonically increasing function of x. Hence for any positive-valued function f: argmax_θ f(θ) = argmax_θ log f(θ). In practice it is often more convenient to optimize the log-likelihood rather than the likelihood itself.

11
Log-likelihood Example: log l(θ) = log(θ^{n_1} (1 − θ)^{n_0}) = n_1 log θ + n_0 log(1 − θ). ∂/∂θ log l(θ) = n_1/θ − n_0/(1 − θ) = (n_1 − (n_1 + n_0)θ) / (θ(1 − θ)) → θ = n_1/(n_1 + n_0)
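Because log is monotone, maximizing the log-likelihood gives the same θ as maximizing the likelihood, and both match the closed form n_1/(n_1 + n_0). A quick numerical sketch (using the thumbtack counts n_1 = 8, n_0 = 2):

```python
import math

n1, n0 = 8, 2

def log_l(theta):
    # log-likelihood: n1*log(theta) + n0*log(1-theta)
    return n1 * math.log(theta) + n0 * math.log(1 - theta)

grid = [i / 1000 for i in range(1, 1000)]   # open interval, avoids log(0)
theta_hat = max(grid, key=log_l)            # numeric argmax of log l
closed_form = n1 / (n1 + n0)                # analytic argmax
```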

12
**Log-likelihood ↔ Likelihood**

Reconsider thumbtack: 8 up, 2 down.

13
**Log-likelihood ↔ Likelihood**

λx_1 + (1 − λ)x_2

14
**Finding Segments Using EM**

A symmetry fits a segment best when the midpoint of the symmetry lies on the segment’s symmetry axis, the endpoints lie half a segment width away from the axis, and the symmetry is perpendicular to the axis. This yields the conditional likelihood for a symmetry given a segment as a four-dimensional Gaussian, and an EM algorithm can then fit a fixed number of segments to the symmetries.

15
Mixture of Gaussians

16
Mixture of Gaussians

17
**Expectation Maximization**

18
**Expectation Maximization**

19
EM Derivation

20
EM Derivation(cont.)

21
Jensen’s inequality

22
EM Derivation(cont.)

23
**EM for Mixture of Gaussians**

24
**EM for Mixture of Gaussians**

25
**EM for Mixture of Gaussians**
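The “EM for Mixture of Gaussians” slides can be summarized with a minimal sketch: EM for a two-component 1-D mixture on synthetic data. The data, initialization at the data extremes, and iteration count are illustrative assumptions, not taken from the slides:

```python
import math, random

def em_gmm_1d(xs, iters=50):
    """Fit a 2-component 1-D Gaussian mixture with EM (a minimal sketch)."""
    mu = [min(xs), max(xs)]            # crude initialization at the data extremes
    var = [1.0, 1.0]
    pi = [0.5, 0.5]
    for _ in range(iters):
        # E-step: responsibility of each component for each point
        resp = []
        for x in xs:
            p = [pi[k] * math.exp(-(x - mu[k]) ** 2 / (2 * var[k]))
                 / math.sqrt(2 * math.pi * var[k]) for k in range(2)]
            s = p[0] + p[1]
            resp.append([p[0] / s, p[1] / s])
        # M-step: re-estimate weights, means, and variances
        for k in range(2):
            nk = sum(r[k] for r in resp)
            pi[k] = nk / len(xs)
            mu[k] = sum(r[k] * x for r, x in zip(resp, xs)) / nk
            var[k] = max(sum(r[k] * (x - mu[k]) ** 2
                             for r, x in zip(resp, xs)) / nk, 1e-6)
    return mu, var, pi

# Synthetic data from two well-separated Gaussians with means 0 and 5
random.seed(0)
xs = [random.gauss(0, 1) for _ in range(200)] + \
     [random.gauss(5, 1) for _ in range(200)]
mu, var, pi = em_gmm_1d(xs)
```

With well-separated clusters, the recovered means land near 0 and 5 and the mixture weights near 0.5 each.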

26
**Representing Likelihood for people**

The likelihood for a nine-segment assembly is computed from a set of 41 geometric features (angles and distances between segments, length ratios, etc.). Treating the features as independent introduces only a relatively small error; the main errors are due to interactions between the kinematic constraints on the hips and shoulders, and viewing pose. L(A) = ∏_{i=1}^{41} d_i(f_i)
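A product of 41 small per-feature likelihoods quickly becomes a tiny number, so in practice one sums logs instead, as the earlier log-likelihood slides suggest. A sketch with made-up placeholder values for the d_i:

```python
import math

d = [0.05] * 41                 # hypothetical per-feature likelihoods d_i(f_i)

L = 1.0
for di in d:
    L *= di                     # direct product: on the order of 1e-54

log_L = sum(math.log(di) for di in d)   # stable equivalent: sum of logs
```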

27
Marginal Likelihood Given a set of independent, identically distributed data points 𝕏 = (x_1, …, x_n), where x_i ~ p(x_i | θ) according to some distribution parameterized by θ, and where θ is itself a random variable with distribution θ ~ p(θ | α), the marginal likelihood asks what the probability p(𝕏 | α) is once θ has been marginalized out.

28
**Building Assemblies Incrementally by Resampling**

Fix a permutation (l_1, …, l_9) of the labels {T, LUA, …}. Generate a sequence S_1, …, S_9 of multisets of samples. For example, (l_1, l_2, l_3) = (T, LUA, LLA) → S_3 = (s_T, s_LUA, s_LLA) (fig. 2). The samples in S_k are drawn from the marginal likelihood L_{l_1…l_k} = ∏_i d_i(f_i).

29
**Building Assemblies Incrementally by Resampling (cont)**

We generate the set of samples S_{k+1} from S_k using importance sampling, a general technique for estimating properties of a particular distribution while only having samples generated from a distribution different from the distribution of interest. Form the set of sub-assemblies (s_{l_1}, …, s_{l_k}, s_{l_{k+1}}) for all groups (s_{l_1}, …, s_{l_k}) ∈ S_k and all choices of s_{l_{k+1}}.
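Importance sampling itself can be shown in a few lines. This sketch estimates the mean of a target distribution p = N(2, 1) using only samples drawn from a different proposal q = N(0, 2); the particular distributions are illustrative choices, not from the paper:

```python
import math, random

def normal_pdf(x, mu, sigma):
    return math.exp(-(x - mu) ** 2 / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

random.seed(1)
samples = [random.gauss(0, 2) for _ in range(100_000)]          # drawn from q, not p
weights = [normal_pdf(x, 2, 1) / normal_pdf(x, 0, 2) for x in samples]

# Self-normalized importance-sampling estimate of E_p[x] (true value: 2)
estimate = sum(w * x for w, x in zip(weights, samples)) / sum(weights)
```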

30
Directing the Sampler We define equivalent assemblies to be those that label the same segment as a torso. (This is a good choice, because different people in an image will tend to have their torsos in different places.) The highest-likelihood assembly is found by a simple greedy algorithm. If there are n segments in the image, we never have more than n sub-assemblies of each type, so the algorithm runs in O(n^2) time.

31
Example: suppose that all of the segments in an assembly, except the lower left arm, are fixed. We are to choose the lower left arm that maximizes the likelihood of the resulting assembly. In our model, the lower left arm can be found by considering all pairs of a lower left arm (which can be any segment) and the upper left arm (which is fixed), and choosing the one with the highest marginal likelihood L_{LUA,LLA}.

32
Example (cont.): Now suppose that we have fixed a torso and, possibly, some limbs, and we want to add the left arm that would maximize the likelihood of the result. First, we find the highest-likelihood left arm for each choice of the upper arm. Since no feature involves the left arm and any other limb, we can choose the best left arm by considering all pairs of the torso (fixed) and a left arm, and choosing the one with the largest marginal likelihood.

33
Counting people This sampling algorithm allows us to count people in images. To estimate the number of people, we begin by selecting a small set of representative assemblies in the image and then use them for counting. We break the set of all assemblies in the image into (not necessarily disjoint) blocks: sets of assemblies such that any two assemblies from the same block have overlapping torsos. Then a representative is chosen from each block as the assembly with the highest likelihood over all assemblies in the block.

34
**In the (a) images we can see representatives that match the configuration of the people in the image, while in image (b) we do not.**

35
Results To learn the likelihood model, they used a set of 193 training images (people standing against a uniform background; all views were frontal and all limbs were visible, although the configurations varied). The training set was expanded by adding the mirror image of each assembly, resulting in 386 configurations. The test set included 145 control images with no people, 228 images with one person, and 65 with three people (from the COREL database).

39
**Pictorial Structure of People**

People are represented using a 2D articulated appearance model composed of 15 part-aligned image rectangles surrounding the projections of body parts. Each body part P_i is a rectangle parameterized in image coordinates by its center [x_i, y_i], its length or size s_i, and its orientation θ_i.

41
**Pictorial Structure of People**

The posterior likelihood of there being a body with parts P_i at image locations l_i is the product of the data likelihoods for the 15 parts and the prior likelihoods for the 14 joints; in negative log form: L(A) = −Σ_i log p_i(l_i) − Σ_{(i,j)∈E} d_{i,j}(l_i, l_j), where d_{i,j}(l_i, l_j) is a function measuring the degree of deformation of the model when part i is placed at location l_i and part j at location l_j.

42
Detecting Body Parts The ultimate goal is to detect people and label them with detailed part locations, in applications where the person may be in any pose and partly occluded. Detecting and labelling body parts is a central problem in all component-based approaches. Clearly the image must be scanned at all relevant locations and scales, but there is also a question of how to handle different part orientations, especially for small, mobile, highly articulated parts such as arms and hands.

43
**Figure 2**

We learn 15 Support Vector or Relevance Vector Machines for the individual parts and the whole body, and during detection run each of them over the scale-orientation-position feature pyramid.

44
Feature Sets Numerous feature sets have been suggested, including image pixel values, wavelet coefficients, and Gaussian derivatives. Here a feature set consisting of the Gaussian-filtered image and its first and second derivatives is used. The feature vector for an image rectangle at location-scale-orientation [x_i, y_i, s_i, θ_i] contains the absolute values of the responses of the six Gaussian filters in the rectangle’s 14×24 window. Thus there are 14×24×6 = 2016 features per body part.

46
**Training**

Using the 2016-dimensional feature vectors for all body parts, we trained two linear classifiers for each part: a Support Vector Machine and a Relevance Vector Machine, training on the smallest sets of examples that give reasonable results (in our case, about 100).

47
**SVM: Linear Separators**

Binary classification can be viewed as the task of separating classes in feature space: the separating hyperplane is wTx + b = 0, with wTx + b > 0 on one side and wTx + b < 0 on the other, and the classifier is f(x) = sign(wTx + b).
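A minimal sketch of the decision rule f(x) = sign(wTx + b); the weights and test points below are arbitrary illustrations:

```python
# Decision rule of a linear separator: f(x) = sign(w^T x + b)
def classify(w, b, x):
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else -1

w, b = (2.0, -1.0), 0.5
side_a = classify(w, b, (1.0, 1.0))    # 2 - 1 + 0.5 = 1.5 > 0  -> +1
side_b = classify(w, b, (-1.0, 1.0))   # -2 - 1 + 0.5 = -2.5 < 0 -> -1
```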

48
Linear Separators Which of the linear separators is optimal?

49
**SVM: Classification Margin**

The distance from an example x_i to the separator is r = |wTx_i + b| / ‖w‖. Examples closest to the hyperplane are support vectors. The margin ρ of the separator is the distance between support vectors.

50
**Maximum Margin Classification**

Maximizing the margin is good according to intuition. It implies that only the support vectors matter; other training examples are ignorable.

51
**Linear SVMs Mathematically**

Let the training set {(x_i, y_i)}_{i=1…n}, x_i ∈ ℝ^d, y_i ∈ {−1, 1}, be separated by a hyperplane with margin ρ. Then for each training example (x_i, y_i): wTx_i + b ≤ −ρ/2 if y_i = −1, and wTx_i + b ≥ ρ/2 if y_i = 1 ⟺ y_i(wTx_i + b) ≥ ρ/2

52
**Linear SVMs Mathematically**

For every support vector x_s the inequality is an equality. After rescaling w and b by ρ/2 in the equality, we obtain that the distance between each x_s and the hyperplane is r = y_s(wTx_s + b)/‖w‖ = 1/‖w‖. Then the margin can be expressed as ρ = 2r = 2/‖w‖.

53
**Linear SVMs Mathematically**

We can formulate the quadratic optimization problem: Find w and b such that ρ = 2/‖w‖ is maximized, and for all (x_i, y_i), i = 1..n: y_i(wTx_i + b) ≥ 1. This can be reformulated as: Find w and b such that Φ(w) = ‖w‖² = wTw is minimized, and for all (x_i, y_i), i = 1..n: y_i(wTx_i + b) ≥ 1.

54
**Solving the Optimization Problem**

Need to optimize a quadratic function subject to linear constraints. Quadratic optimization problems are a well-known class of mathematical programming problems for which several (non-trivial) algorithms exist. Find w and b such that Φ(w) =wTw is minimized and for all (xi, yi), i=1..n : yi (wTxi + b) ≥ 1

55
**Solving the Optimization Problem**

The solution involves constructing a dual problem where a Lagrange multiplier αi is associated with every inequality constraint in the primal (original) problem: Find α1…αn such that Q(α) =Σαi - ½ΣΣαiαjyiyjxiTxj is maximized and (1) Σαiyi = 0 (2) αi ≥ 0 for all αi

56
**Quadratic Program**

A special type of mathematical optimization problem: optimizing (minimizing or maximizing) a quadratic function of several variables subject to linear constraints on those variables. The quadratic programming problem can be formulated as follows. Assume x ∈ ℝ^n; both x and c are column vectors with n elements (n×1 matrices), and Q is a symmetric n×n matrix. Minimize (with respect to x): f(x) = ½xTQx + cTx

57
**SVM: The Optimization Problem Solution**

Given a solution α_1…α_n to the dual problem, the solution to the primal is: w = Σα_iy_ix_i, and b = y_k − Σα_iy_ix_iTx_k for any α_k > 0. Each non-zero α_i indicates that the corresponding x_i is a support vector. The classifying function is then (note that we don’t need w explicitly): f(x) = Σα_iy_ix_iTx + b. Notice that it relies on an inner product between the test point x and the support vectors x_i; we will return to this later. Also keep in mind that solving the optimization problem involved computing the inner products x_iTx_j between all pairs of training points.
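To make the recovery of w and b from the dual concrete, here is a hypothetical two-point toy problem (the points, labels, and grid search are all illustrative, not from the slides). With the constraint Σα_iy_i = 0 the two multipliers are equal, so the dual objective reduces to a scalar function:

```python
import math

xs = [(0.0, 0.0), (2.0, 0.0)]      # hypothetical toy training points
ys = [-1, 1]

def dot(a, b):
    return a[0] * b[0] + a[1] * b[1]

# Sum(alpha_i y_i) = 0 forces alpha_1 = alpha_2 = a, so Q becomes Q(a)
def Q(a):
    s = sum(a * a * ys[i] * ys[j] * dot(xs[i], xs[j])
            for i in range(2) for j in range(2))
    return 2 * a - 0.5 * s

a = max((i / 1000 for i in range(1001)), key=Q)                      # -> 0.5
w = [sum(a * ys[i] * xs[i][d] for i in range(2)) for d in range(2)]  # -> [1.0, 0.0]
b = ys[1] - dot(w, xs[1])                                            # -> -1.0
margin = 2 / math.sqrt(dot(w, w))                                    # -> 2.0
```

The recovered margin 2/‖w‖ = 2 equals the distance between the two points, as it should for a maximum-margin separator.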

58
**Not linearly separable cases**

59
**Soft Margin Classification**

What if the training set is not linearly separable? Slack variables ξ_i can be added to allow misclassification of difficult or noisy examples; the resulting margin is called soft.

60
**Soft Margin Classification Mathematically**

The old formulation: Find w and b such that Φ(w) =wTw is minimized and for all (xi ,yi), i=1..n : yi (wTxi + b) ≥ 1

61
**Soft Margin Classification Mathematically**

The modified formulation incorporates slack variables: Find w and b such that Φ(w) = wTw + CΣξ_i is minimized, and for all (x_i, y_i), i = 1..n: y_i(wTx_i + b) ≥ 1 − ξ_i, ξ_i ≥ 0. The parameter C can be viewed as a way to control overfitting: it trades off the relative importance of maximizing the margin and fitting the training data.
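A small sketch of how the slack variables behave for a hypothetical fixed separator w = (1, 0), b = −1 (the decision boundary x_1 = 1; not taken from the slides):

```python
# Slack for a point: xi = max(0, 1 - y (w^T x + b)).
# Zero outside the margin, positive for margin violations and misclassifications.
def slack(w, b, x, y):
    score = sum(wi * xi for wi, xi in zip(w, x))
    return max(0.0, 1 - y * (score + b))

w, b = (1.0, 0.0), -1.0
outside      = slack(w, b, (3.0, 0.0), 1)   # safely classified -> 0.0
inside       = slack(w, b, (1.5, 0.0), 1)   # inside the margin -> 0.5
misclassified = slack(w, b, (0.0, 0.0), 1)  # on the wrong side -> 2.0
```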

62
**Soft Margin Classification – Solution**

The dual problem is identical to the separable case (it would not be identical if the 2-norm penalty CΣξ_i² were used in the primal objective; then we would need additional Lagrange multipliers for the slack variables): Find α_1…α_N such that Q(α) = Σα_i − ½ΣΣα_iα_jy_iy_jx_iTx_j is maximized and (1) Σα_iy_i = 0, (2) 0 ≤ α_i ≤ C for all α_i

63
Non-linear SVMs

64
**Non-linear SVMs: Feature spaces**

General idea: the original feature space can always be mapped to some higher-dimensional feature space where the training set is separable.

65
The “Kernel Trick” The linear classifier relies on the inner product between vectors: K(x_i, x_j) = x_iTx_j. If every datapoint is mapped into a high-dimensional space via some transformation Φ: x → φ(x), the inner product becomes K(x_i, x_j) = φ(x_i)Tφ(x_j). A kernel function is a function that is equivalent to an inner product in some feature space. Thus, a kernel function implicitly maps data to a high-dimensional space (without the need to compute each φ(x) explicitly).

66
**The “Kernel Trick” Example**

2-dimensional vectors x = [x_1 x_2]; let K(x_i, x_j) = (1 + x_iTx_j)². We need to show that K(x_i, x_j) = φ(x_i)Tφ(x_j): K(x_i, x_j) = (1 + x_iTx_j)² = (1 + x_{i1}x_{j1} + x_{i2}x_{j2})² = [1, x_{i1}², √2·x_{i1}x_{i2}, x_{i2}², √2·x_{i1}, √2·x_{i2}]T [1, x_{j1}², √2·x_{j1}x_{j2}, x_{j2}², √2·x_{j1}, √2·x_{j2}] = φ(x_i)Tφ(x_j), where φ(x) = [1, x_1², √2·x_1x_2, x_2², √2·x_1, √2·x_2]
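The identity can be checked numerically: the polynomial kernel applied directly agrees with the inner product of the expanded feature vectors (the test vectors are arbitrary):

```python
import math

def K(x, y):
    # Polynomial kernel (1 + x.y)^2 computed directly in the input space
    return (1 + x[0] * y[0] + x[1] * y[1]) ** 2

def phi(x):
    # Explicit 6-dimensional feature map from the derivation above
    r2 = math.sqrt(2)
    return [1, x[0] ** 2, r2 * x[0] * x[1], x[1] ** 2, r2 * x[0], r2 * x[1]]

x, y = [0.5, -1.2], [2.0, 0.3]      # arbitrary test vectors
lhs = K(x, y)
rhs = sum(a * b for a, b in zip(phi(x), phi(y)))
```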

67
**Examples of Kernel Functions**

Linear: K(x_i, x_j) = x_iTx_j. Mapping Φ: x → φ(x), where φ(x) is x itself. Polynomial of power p: K(x_i, x_j) = (1 + x_iTx_j)^p. Gaussian (radial basis function): K(x_i, x_j) = exp(−‖x_i − x_j‖² / (2σ²)). Mapping Φ: x → φ(x), where φ(x) is infinite-dimensional. It decreases with distance and ranges between zero (in the limit) and one (when x_i = x_j), so it has a ready interpretation as a similarity measure.
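A minimal sketch of the Gaussian (RBF) kernel, assuming σ = 1, showing the similarity interpretation: the value is 1 for identical points and decays toward 0 with distance:

```python
import math

def rbf(x, y, sigma=1.0):
    # K(x, y) = exp(-||x - y||^2 / (2 sigma^2))
    sq_dist = sum((a - b) ** 2 for a, b in zip(x, y))
    return math.exp(-sq_dist / (2 * sigma ** 2))

same = rbf((1.0, 2.0), (1.0, 2.0))   # identical points -> 1.0
near = rbf((0.0, 0.0), (1.0, 0.0))   # close points: large similarity
far  = rbf((0.0, 0.0), (5.0, 0.0))   # distant points: similarity near 0
```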

68
**Non-linear SVMs Mathematically**

Dual problem formulation: Find α_1…α_n such that Q(α) = Σα_i − ½ΣΣα_iα_jy_iy_jK(x_i, x_j) is maximized and (1) Σα_iy_i = 0, (2) α_i ≥ 0 for all α_i. The solution is: f(x) = Σα_iy_iK(x_i, x) + b

69
**RVM: Relevance Vector Machines**

A Bayesian alternative to the support vector machine. Limitations of the SVM: only two classes; a large number of kernels; decisions at the outputs instead of probabilities.

70
Parsing Algorithm Given N candidate body-part locations l_{kn} detected by each body-part classifier C_k, we look for a ‘parse’ of the scene into one or more ‘body trees’. Given a detection score D_k(l_{kn}) for all candidates n = 1…N, we search for the best candidates as a function of their direct parents pa(n) in the body tree.

71
**Figure 6**

Figure 6 shows the three most probable parses for four test images, ranked in order of decreasing likelihood.

72
Learning the Body Tree The articulation model is a linear combination of the differences between pairs of joint locations. Using positive and negative examples from the training set, they trained a linear SVM classifier to learn a set of weights such that the score is positive for all positive examples and negative for all negative examples.

73
**Results- Detection of body parts**

The RVM classifiers perform only slightly worse than their SVM counterparts, with mean false detection rates of 80.1% and 78.5% respectively. The worst results are obtained for the torso and head models. The torso is probably the hardest body part to detect, as it is almost entirely shapeless; it is probably best detected indirectly from geometric cues. In contrast, the head is known to contain highly discriminant features, but the training images contain a wide range of poses, so significantly more training data would be needed.

75
**Results- Detection of Body Trees**

First experiment: 100 training examples. We obtained correct detection rates of 72% using RVM scores and 83% using SVM scores; however, a majority of the body parts were correctly positioned in only 36% of the test images for RVM and 55% for SVM. Second experiment: 200 training examples. Detection rate: 76% for SVM and 88% for RVM.

76
**References**

Finding People by Sampling, Ioffe & Forsyth, ICCV 1999
Probabilistic Methods for Finding People, Ioffe & Forsyth, 2001
Pictorial Structure Models for Object Recognition, Felzenszwalb & Huttenlocher
Learning to Parse Pictures of People, Ronfard, Schmid & Triggs, ECCV 2002
