
1 3D Geometry for Computer Graphics Class 3

2 Last week - eigendecomposition. We want to learn how the transformation A works:

3 Last week - eigendecomposition. If we look at how A acts on arbitrary vectors, it doesn't tell us much.

4 Spectra and diagonalization. If A is symmetric, the eigenvectors are orthogonal (and there's always an eigenbasis): $A = U \Lambda U^T$, where $Au_i = \lambda_i u_i$.
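Not from the slides, but as a concrete illustration: a minimal numpy sketch of this decomposition, using an arbitrary symmetric matrix as the example.

```python
import numpy as np

# An example symmetric matrix (chosen arbitrarily for illustration).
A = np.array([[3.0, 1.0],
              [1.0, 2.0]])

# eigh is specialized for symmetric matrices: it returns real eigenvalues
# (in ascending order) and orthonormal eigenvectors as the columns of U.
lam, U = np.linalg.eigh(A)

# Verify A = U diag(lambda) U^T and A u_i = lambda_i u_i.
assert np.allclose(A, U @ np.diag(lam) @ U.T)
for i in range(len(lam)):
    assert np.allclose(A @ U[:, i], lam[i] * U[:, i])
```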

5 In real life… Matrices that have an eigenbasis are rare – general transformations involve rotations as well, not only scalings. We want to understand how general transformations behave. We need a generalization of eigendecomposition: the SVD, $A = U \Sigma V^T$. Before we learn SVD, we'll see Principal Component Analysis – a use of spectral analysis to analyze the shape and dimensionality of scattered data.
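As a preview of the SVD the slides mention, a hedged numpy sketch; the example matrix is made up:

```python
import numpy as np

# A general (non-symmetric) transformation: it mixes rotation and scaling,
# so it has no orthogonal eigenbasis, but it always has an SVD.
A = np.array([[2.0, -1.0],
              [1.0,  1.5]])

# SVD factors any matrix as A = U @ diag(sigma) @ Vt,
# with orthogonal U, V and non-negative singular values sigma.
U, sigma, Vt = np.linalg.svd(A)
assert np.allclose(A, U @ np.diag(sigma) @ Vt)
```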

6 The plan for today. First we'll see some applications of PCA, then look at the theory.

7 PCA – the general idea. PCA finds an orthogonal basis that best represents a given data set: the sum of squared distances from the $x'$ axis is minimized. [Figure: 2D point set with the standard axes x, y and the PCA axes x', y'.]

8 PCA – the general idea. PCA finds an orthogonal basis that best represents a given data set. In 3D, PCA finds a best approximating plane (again, in terms of the sum of squared distances). [Figure: 3D point set in the standard basis x, y, z.]


10 Application: finding tight bounding box. An axis-aligned bounding box agrees with the coordinate axes. [Figure: 2D point set in a box spanning minX..maxX horizontally and minY..maxY vertically.]

11 Usage of bounding boxes (bounding volumes). They serve as a very simple "approximation" of the object: fast collision detection, visibility queries, and whenever we need to know the dimensions (size) of the object. The models consist of thousands of polygons; to quickly test that they don't intersect, the bounding boxes are tested first. Sometimes a hierarchy of BBs is used. The tighter the BB, the fewer "false alarms" we get.
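A minimal sketch of the cheap rejection test described here, assuming axis-aligned boxes given by min/max corners (the boxes below are hypothetical example data):

```python
import numpy as np

def aabb_overlap(min_a, max_a, min_b, max_b):
    """Two axis-aligned boxes overlap iff their intervals overlap on every axis."""
    return bool(np.all(min_a <= max_b) and np.all(min_b <= max_a))

# Hypothetical boxes, each given by its min and max corner.
a_min, a_max = np.array([0.0, 0.0, 0.0]), np.array([1.0, 1.0, 1.0])
b_min, b_max = np.array([0.5, 0.5, 0.5]), np.array([2.0, 2.0, 2.0])
print(aabb_overlap(a_min, a_max, b_min, b_max))  # True: can't reject; test the polygons
```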

12 Finding cluster representatives


14 Application: finding tight bounding box. Oriented bounding box: we find better axes! [Figure: the same 2D point set in a tighter box aligned with new axes x', y'.]

15 Application: finding tight bounding box. This (the axis-aligned box) is not the optimal bounding box. [Figure: 3D object in an axis-aligned box with axes x, y, z.]

16 Application: finding tight bounding box. Oriented bounding box: we find better axes!

17 Managing high-dimensional data. Database of face scans (3D geometry + texture): about 10,000 points in each scan; x, y, z, R, G, B – 6 numbers for each point. Thus, each scan is a 10,000 × 6 = 60,000-dimensional vector!

18 Managing high-dimensional data. How do we find interesting axes in this 60,000-dimensional space? Axes that measure age, gender, etc. There is hope: the faces are likely to be governed by a small set of parameters (much fewer than 60,000…). [Figure: face space with an age axis and a gender axis; FaceGen demo.]

19 Notation. Denote our data points by $x_1, x_2, \ldots, x_n \in \mathbb{R}^d$.

20 The origin of the new axes. The origin is the zero-order approximation of our data set (a point). It will be the center of mass: $m = \frac{1}{n}\sum_{i=1}^{n} x_i$. It can be shown that $m$ minimizes the sum of squared distances to the points, $\sum_i \|x_i - p\|^2$, over all choices of $p$.
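A small numpy sketch of the center of mass and the minimization claim (the random point set is a placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))   # n = 100 points in R^3 (placeholder data)

m = X.mean(axis=0)              # center of mass: m = (1/n) * sum_i x_i

# Sanity check of the minimization claim: the summed squared distance
# to m is no larger than to a perturbed point.
def sq_dist_sum(p):
    return np.sum((X - p) ** 2)

assert sq_dist_sum(m) <= sq_dist_sum(m + 0.1)
```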

21 Scatter matrix. Denote $y_i = x_i - m$, $i = 1, 2, \ldots, n$. The scatter matrix is $S = YY^T$, where $Y$ is the $d \times n$ matrix with the $y_k$ as columns ($k = 1, 2, \ldots, n$).
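Continuing that sketch, the scatter matrix in numpy; note the points are stored as rows here, so Y transposes them into columns to match the slides' convention:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # placeholder data, points as rows

m = X.mean(axis=0)
Y = (X - m).T                        # d x n matrix with centered points as columns
S = Y @ Y.T                          # scatter matrix S = Y Y^T, shape d x d

assert np.allclose(S, S.T)           # S is symmetric by construction
```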

22 Variance of projected points. In a way, S measures the variance (= scatter) of the data in different directions. Let's look at a line L through the center of mass m, and project our points $x_i$ onto it. The variance of the projected points $x'_i$ is: $\mathrm{var}(L) = \frac{1}{n}\sum_{i=1}^{n} \|x'_i - m\|^2$. [Figure: original set; a line with small variance; a line with large variance.]

23 Variance of projected points. Given a direction v, $\|v\| = 1$, the projection of $x_i$ onto the line $L = m + vt$ is: $x'_i = m + \langle x_i - m, v\rangle v = m + \langle y_i, v\rangle v$. [Figure: point $x_i$ projected to $x'_i$ on the line L through m with direction v.]

24 Variance of projected points. So, $\mathrm{var}(L) = \frac{1}{n}\sum_i \|x'_i - m\|^2 = \frac{1}{n}\sum_i \langle y_i, v\rangle^2 = \frac{1}{n}\sum_i v^T y_i y_i^T v = \frac{1}{n}\, v^T \big(\sum_i y_i y_i^T\big) v = \frac{1}{n}\, v^T S v = \frac{1}{n}\langle Sv, v\rangle$.
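A quick numerical check of this identity, reusing the placeholder data from the sketches above:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))        # placeholder data
m = X.mean(axis=0)
Y = (X - m).T
S = Y @ Y.T

v = np.array([1.0, 2.0, -1.0])
v /= np.linalg.norm(v)               # unit direction

proj_var = np.sum((Y.T @ v) ** 2)    # sum_i <y_i, v>^2
quad_form = v @ S @ v                # <Sv, v>
assert np.isclose(proj_var, quad_form)
```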

25 Directions of maximal variance. So we have: $\mathrm{var}(L) = \frac{1}{n}\langle Sv, v\rangle$. Theorem: let $f : \{v \in \mathbb{R}^d \mid \|v\| = 1\} \to \mathbb{R}$, $f(v) = \langle Sv, v\rangle$ (where S is a symmetric matrix). Then the extrema of f are attained at the eigenvectors of S. So the eigenvectors of S are directions of maximal/minimal variance!
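A sketch verifying the theorem numerically: no random unit direction beats the top eigenvector of S (the anisotropic data is a placeholder):

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3)) * np.array([3.0, 1.0, 0.3])  # stretched placeholder data
Y = (X - X.mean(axis=0)).T
S = Y @ Y.T

lam, V = np.linalg.eigh(S)           # ascending eigenvalues, orthonormal eigenvectors
v_max = V[:, -1]                     # eigenvector of the largest eigenvalue

# No random unit direction should exceed the top eigenvector's variance.
for _ in range(1000):
    v = rng.normal(size=3)
    v /= np.linalg.norm(v)
    assert v @ S @ v <= v_max @ S @ v_max + 1e-9
```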

26 Summary so far. We take the centered data points $y_1, y_2, \ldots, y_n \in \mathbb{R}^d$ and construct the scatter matrix $S = YY^T$. S measures the variance of the data points in each direction, and the eigenvectors of S are directions of maximal variance.

27 Scatter matrix - eigendecomposition. S is symmetric, so S has an eigendecomposition: $S = V \Lambda V^T$, where the columns of $V$ are the eigenvectors $v_1, v_2, \ldots, v_d$ and $\Lambda = \mathrm{diag}(\lambda_1, \lambda_2, \ldots, \lambda_d)$. The eigenvectors form an orthogonal basis.

28 Principal components. Eigenvectors that correspond to large eigenvalues are the directions in which the data has strong components (= large variance). If the eigenvalues are all more or less the same, there is no preferred direction.

29 Principal components. When there is no preferred direction, S looks like $\lambda I$ (the same eigenvalue all along the diagonal), and any vector is an eigenvector. When there is a clear preferred direction, S looks like $\mathrm{diag}(\lambda_1, \lambda_2)$ where $\lambda_2$ is close to zero, much smaller than $\lambda_1$.

30 How to use what we got. For finding the oriented bounding box, we simply compute the bounding box with respect to the axes defined by the eigenvectors. The origin is at the mean point m. [Figure: object in a box aligned with $v_1, v_2, v_3$.]
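A compact sketch of this OBB recipe, assuming points stored as rows of a numpy array (the elongated cloud is placeholder data):

```python
import numpy as np

def oriented_bounding_box(X):
    """PCA-based OBB: mean point, eigenvector axes, and extents along them."""
    m = X.mean(axis=0)
    Y = (X - m).T                       # d x n centered data, points as columns
    S = Y @ Y.T                         # scatter matrix
    _, V = np.linalg.eigh(S)            # columns of V = orthonormal axes
    coords = (X - m) @ V                # point coordinates in the PCA basis
    return m, V, coords.min(axis=0), coords.max(axis=0)

rng = np.random.default_rng(1)
X = rng.normal(size=(200, 3)) @ np.diag([4.0, 1.0, 0.5])  # elongated placeholder cloud
m, axes, lo, hi = oriented_bounding_box(X)
print(hi - lo)   # box extents along the three PCA axes
```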

31 For approximation. The projected data set approximates the original data set: projecting onto $v_1$ yields a line segment that approximates the original data set. [Figure: 2D point set with PCA axes $v_1, v_2$ and its projection onto $v_1$.]

32 For approximation. In general dimension d, the eigenvalues are sorted in descending order: $\lambda_1 \geq \lambda_2 \geq \ldots \geq \lambda_d$. The eigenvectors are sorted accordingly. To get an approximation of dimension $d' < d$, we take the first $d'$ eigenvectors and look at the subspace they span ($d' = 1$ is a line, $d' = 2$ is a plane, …).

33 For approximation. To get an approximating set, we project the original data points onto the chosen subspace. Writing $x_i = m + \alpha_1 v_1 + \alpha_2 v_2 + \ldots + \alpha_{d'} v_{d'} + \ldots + \alpha_d v_d$, the projection is: $x'_i = m + \alpha_1 v_1 + \alpha_2 v_2 + \ldots + \alpha_{d'} v_{d'} + 0 \cdot v_{d'+1} + \ldots + 0 \cdot v_d$.
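A sketch of this projection step in numpy (placeholder data; pca_project is a name made up for illustration):

```python
import numpy as np

def pca_project(X, d_prime):
    """Project points (rows of X) onto the subspace of the first d' eigenvectors."""
    m = X.mean(axis=0)
    Y = (X - m).T
    S = Y @ Y.T
    lam, V = np.linalg.eigh(S)          # ascending order
    V_top = V[:, ::-1][:, :d_prime]     # first d' eigenvectors (descending eigenvalues)
    alpha = (X - m) @ V_top             # coefficients alpha_1 .. alpha_d'
    return m + alpha @ V_top.T          # x'_i = m + sum_j alpha_j v_j

rng = np.random.default_rng(2)
X = rng.normal(size=(100, 3)) * np.array([5.0, 1.0, 0.2])  # placeholder data
X_approx = pca_project(X, d_prime=2)    # best-fit plane approximation
print(np.sum((X - X_approx) ** 2))      # residual: mostly the variance of the 0.2 axis
```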

34 Optimality of approximation. The approximation is optimal in the least-squares sense: it gives the minimal value of $\sum_i \|x_i - x'_i\|^2$ among all $d'$-dimensional subspaces. Equivalently, the projected points have maximal variance. [Figure: original set; projection on an arbitrary line; projection on the $v_1$ axis.]

35 See you next time

36 Technical remarks: $\lambda_i \geq 0$ for $i = 1, \ldots, d$ (such matrices are called positive semi-definite), so we can indeed sort the eigenvalues by magnitude as above. Theorem: $\lambda_i \geq 0$ for all $i$ $\iff$ $\langle Sv, v\rangle \geq 0$ for all $v$. Proof: since $S = YY^T$, $\langle Sv, v\rangle = \langle YY^T v, v\rangle = \langle Y^T v, Y^T v\rangle = \|Y^T v\|^2 \geq 0$. Therefore $\lambda_i \geq 0$ for all $i$.
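A quick numerical confirmation of positive semi-definiteness on placeholder data:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.normal(size=(50, 4))         # placeholder data
Y = (X - X.mean(axis=0)).T
S = Y @ Y.T

lam = np.linalg.eigvalsh(S)
assert np.all(lam >= -1e-9)          # all eigenvalues non-negative (up to roundoff)

v = rng.normal(size=4)
assert v @ S @ v >= -1e-9            # <Sv, v> = ||Y^T v||^2 >= 0
```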

