CENG 789 – Digital Geometry Processing 06- Mesh Generation


1 CENG 789 – Digital Geometry Processing 06- Mesh Generation
(Implicit Surfaces & Synthesis) Assoc. Prof. Yusuf Sahillioğlu, Computer Eng. Dept., Turkey

2 Explicit Representation
RECALL: Polygon meshes are piecewise linear surface representations. Analogous to piecewise functions: think of the surface as (the range of) a "shape" function.

3 Explicit Surface Representation
For varying t, the parameterization f(t) = (r cos t, r sin t), t in [0, 2pi], traces out the points of a 2D shape (a circle of radius r). The parameterization f maps a 1D parameter domain, e.g., [0, 2pi], to the curve (surface) embedded in R2 (R3).

4 Implicit Surface Representation
Implicit or Volumetric surface representation. Surface is defined to be the zero-set of a scalar-valued function F. That is, we have F : R3 -> R, and then the surface is S = {x in R3 | F(x) = 0}. Circle of radius r: F(x, y) = x^2 + y^2 - r^2, i.e., the circle is defined by the (x, y) pairs that make the right-hand side 0.
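A minimal sketch of evaluating such an implicit function on a grid (Python/NumPy assumed; the function name F_circle and the grid resolution are illustrative, not from the slides):

```python
import numpy as np

def F_circle(x, y, r=1.0):
    # F(x, y) = x^2 + y^2 - r^2: zero exactly on the circle of radius r;
    # the sign tells on which side of the curve a point lies.
    return x**2 + y**2 - r**2

# evaluate F on a small uniform grid; the zero level-set is the circle
xs, ys = np.meshgrid(np.linspace(-2, 2, 5), np.linspace(-2, 2, 5))
print(F_circle(xs, ys, r=1.0))
```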

5 Explicit vs. Implicit
Explicit: range of a parameterization function. Implicit: kernel of an implicit function.

6 Explicit vs. Implicit Explicit: for non-mathematical/arbitrary shapes (direct point) Range of a parameterization function. Piecewise approximation. Implicit: for non-mathematical/arbitrary shapes. Kernel of an implicit function. (zero-set)

7 Explicit vs. Implicit
Explicit: range of a parameterization function. Piecewise approximation. Splines, triangle meshes, points. Easy rendering. Easy geometry modification (new t = new sample). Implicit: kernel of an implicit function. Scalar-valued 3D grid (mesh lies here). Easy in/out test (just evaluate F). Easy topology modification: the structure of F, e.g., a voxel grid, is independent of the topology of the level-set surface; hence we can easily change surface topology/connectivity.

8 Implicit Surfaces Level-set of a function defines the shape.
Level means function values are the same = at the same level. Traditionally, level is 0, hence 0-set of the function F is sought.

9 Implicit Surfaces Level-set of 2D function F(x, y) defines 1D curve.

10 Implicit Surfaces Level-set of 3D function F(x, y, z) defines 2D surface.

11 Implicit Function F Most common and natural representation is the Signed Distance Function. Maps each 3D point x to its signed distance F(x) from the surface. Sign indicates inside/outside. F(x)=0 means x is on the surface. Other than the SDF, continuous intensity functions from CT/MRI scans are also popular. Here we will focus on the computation of the SDF as it is a geometric problem. CT/MRI devices provide intensity information with their underlying radiologic technology.

12 Implicit Function F Most common and natural representation is the Signed Distance Function. Maps each 3D point x to its signed distance F(x) from the surface. Sign indicates inside/outside. F(x)=0 means x is on the surface. Other than the SDF, continuous intensity functions from CT/MRI scans are also popular. Here we will focus on the computation of the SDF as it is a geometric problem. CT/MRI devices provide intensity information with their underlying radiologic technology. Intensity: amount of vibration caused by magnets in MRI (no radiation). Intensity: x-rays (x radiation) are beamed & received along the tunnel in CT.

13 Implicit Function F Most common and natural representation is the Signed Distance Function. Maps each 3D point x to its signed distance F(x) from the surface. Sign indicates inside/outside. F(x)=0 means x is on the surface. Another function for the inside/outside test: Winding Numbers (e.g., W = 2 for a point the curve winds around twice). See Wikipedia and the `Robust Inside-Outside Segmentation using Generalized Winding Numbers' paper for details on winding numbers.

14 Implicit Function F Most common and natural representation is Signed Distance Function. Maps each 3D point x to its signed distance F(x) from the surface. Sign indicates inside/outside. F(x)=0 means x is on the surface. Another function for inside/outside test: Winding/Turning Numbers. More robust than a signed distance function. Although inside, grid point g will be wrongly treated as outside as it is coupled w/ the closest sample, which yields a positive signed distance (see slide 27). No such problem w/ Winding Number test.
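As a hedged illustration of the winding-number test in 2D (the generalized 3D version in the cited paper uses solid angles over mesh triangles instead), a point is inside a closed polygon when the signed angles it subtends sum to a nonzero multiple of 2pi; the polygon and point below are made up for the example (NumPy assumed):

```python
import numpy as np

def winding_number(polygon, p):
    """Sum of signed angles subtended at p by each edge of a closed polygon.
    ~0 for outside points, ~1 (or another nonzero integer) for inside points."""
    w = 0.0
    n = len(polygon)
    for i in range(n):
        a = polygon[i] - p
        b = polygon[(i + 1) % n] - p
        cross = a[0] * b[1] - a[1] * b[0]      # z-component of a x b
        w += np.arctan2(cross, np.dot(a, b))   # signed angle from a to b
    return w / (2.0 * np.pi)

square = np.array([[0, 0], [1, 0], [1, 1], [0, 1]], dtype=float)
print(winding_number(square, np.array([0.5, 0.5])))  # ~1.0 (inside)
print(winding_number(square, np.array([2.0, 2.0])))  # ~0.0 (outside)
```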

15 Surface Reconstruction
Given a set of 3D points, compare Explicit vs. Implicit reconstruction. Why is implicit better?

16 Surface Reconstruction
Given a set of 3D points, compare Explicit vs. Implicit reconstruction. Explicit reconstruction method: Connect sample points by triangles. Exact interpolation of sample points. Bad for noisy or misaligned data (common in scans). May lead to holes or non-manifoldness.

17 Surface Reconstruction
Given a set of 3D points, compare Explicit vs. Implicit reconstruction. Implicit reconstruction method: Estimate signed distance function (SDF) for each grid point (scalar-valued grid). Extract level zero iso-surface from this grid (Marching Cubes). Approximation of input points (better in noisy situations). Manifoldness by construction.

18 Implicit Reconstruction
Implicit function approach.

19 Implicit Reconstruction
Implicit function approach. Define a function F : R3 -> R with value <0 outside, >0 inside, =0 on the surface.

20 Implicit Reconstruction
Implicit function approach. Extract the zero-set isosurface, i.e., S = {x in R3 | F(x) = 0 }.

21 Implicit Reconstruction
Implicit function: Signed distance function F : R3 -> R. Construction algorithm. Input: point samples representing the object to be reconstructed. Output: F value at each grid point, i.e., signed distance of that grid pnt to the sampled surface.

22 Implicit Reconstruction
So we need a grid that encapsulates the sampled surface. Uniform 2D grid: the mesh to be extracted is a 1D (closed) curve. Uniform 3D grid: the mesh to be extracted is a 2D (closed) surface.

23 Implicit Reconstruction
Mesh extraction idea. Marching Squares (2D), Marching Cubes (3D).

24 Implicit Reconstruction
Mesh extraction idea. Marching Squares (2D), Marching Cubes (3D).

25 Implicit Reconstruction
Steps for Marching Squares/Cubes Discretize space (use a regular grid enclosing the surface) Evaluate signed distance function on grid Classify grid points (inside/outside w.r.t. surface) Classify grid edges (one endpoint inside, one outside) Compute intersections Connect intersections

26 Implicit Reconstruction
Steps for Marching Squares/Cubes Discretize space (use a regular grid enclosing the surface) Evaluate signed distance function on grid Classify grid points (inside/outside w.r.t. surface) Classify grid edges (one endpoint inside, one outside) Compute intersections Connect intersections (ambiguities may arise) Handle by subsampling inside a cell, or randomly picking 1 of the 2 possibilities
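A minimal Marching Squares sketch following these steps (evaluate F on the grid, classify points, find sign-change edges, interpolate intersections, connect them per cell); it omits the full case table, so the ambiguous 4-crossing case is resolved arbitrarily, as the slide notes. NumPy and the function/argument names are assumptions:

```python
import numpy as np

def marching_squares(F, xs, ys, iso=0.0):
    """F[i, j] = scalar value at grid point (xs[i], ys[j]).
    Returns line segments approximating the iso-contour F = iso."""
    def interp(p0, p1, f0, f1):
        u = f0 / (f0 - f1)                     # F assumed linear on the edge
        return (p0[0] + u * (p1[0] - p0[0]), p0[1] + u * (p1[1] - p0[1]))

    segments = []
    for i in range(len(xs) - 1):
        for j in range(len(ys) - 1):
            corners = [(xs[i], ys[j]), (xs[i+1], ys[j]),
                       (xs[i+1], ys[j+1]), (xs[i], ys[j+1])]
            vals = [F[i, j] - iso, F[i+1, j] - iso,
                    F[i+1, j+1] - iso, F[i, j+1] - iso]
            crossings = []
            for k in range(4):                 # classify the 4 cell edges
                f0, f1 = vals[k], vals[(k + 1) % 4]
                if (f0 < 0) != (f1 < 0):       # one endpoint inside, one outside
                    crossings.append(interp(corners[k], corners[(k + 1) % 4], f0, f1))
            for k in range(0, len(crossings) - 1, 2):   # connect intersections
                segments.append((crossings[k], crossings[k + 1]))
    return segments
```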

27 Implicit Reconstruction
Back to the construction of the Signed Distance Function F : R3 -> R. Input: sample points. Output: F value at each grid point. Algo: Associate a tangent plane with each sample point. Why? Tangent planes are local linear approximations to the surface. Compute F(x) = (x - oi) · ni, where oi is the tangent-plane center and ni its unit normal.
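A sketch of this evaluation, assuming the tangent-plane centers oi and unit normals ni are already available (names are illustrative); the nearest plane is found by brute force here, and a kd-tree would replace that step in practice (next slides):

```python
import numpy as np

def signed_distance(x, centers, normals):
    """F(x) = (x - o_i) . n_i, using the tangent plane of the sample
    nearest to the query point x (brute-force nearest-sample lookup)."""
    i = np.argmin(np.linalg.norm(centers - x, axis=1))
    return np.dot(x - centers[i], normals[i])
```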

28 Implicit Reconstruction
Tangent plane for sample point s is found as follows: Find k-nearest neighbors of s. Apply principal component analysis on this set of close points (slide 65). Find the covariance matrix of the set. Take the eigenvectors of this matrix as the principal axes. Use the third best axis (w/ smallest eigenvalue) as the normal of the tangent plane, i.e., ni. Use the mean of this point set as the centering point of the plane, i.e., oi. The smaller the 3rd eigenvalue, the more confident the plane is; in the figure, a is confident, b and c are not (darker = more ambiguous).
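A hedged sketch of the tangent-plane estimation just described: PCA over the k nearest neighbors, centroid as oi, smallest-eigenvalue eigenvector as ni (see slide 65). Consistent orientation of the normals across samples is a separate step not shown; NumPy and the function name are assumptions:

```python
import numpy as np

def tangent_plane(samples, idx, k=10):
    """Return (o_i, n_i, confidence) for sample samples[idx] via PCA of its k-NN."""
    s = samples[idx]
    d = np.linalg.norm(samples - s, axis=1)
    nbrs = samples[np.argsort(d)[:k]]        # brute-force k nearest neighbors
    o = nbrs.mean(axis=0)                    # centering point of the plane
    Y = nbrs - o
    S = Y.T @ Y                              # 3x3 covariance (scatter) matrix
    evals, evecs = np.linalg.eigh(S)         # eigenvalues in ascending order
    n = evecs[:, 0]                          # smallest-eigenvalue axis = plane normal
    confidence = evals[0]                    # smaller => more confident plane
    return o, n, confidence
```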

29 Implicit Reconstruction
k-nearest points. Brute force: O(k * N) when we have N samples. //~10^3 steps for N = 1000 Can be done in O(k * lg N) using Kd-trees. //~10 steps Kd-tree: cells are axis-aligned bounding boxes.
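In practice a library kd-tree typically handles the k-nearest-neighbor queries; a short sketch with SciPy (assuming it is available):

```python
import numpy as np
from scipy.spatial import cKDTree

samples = np.random.rand(1000, 3)            # N = 1000 sample points
tree = cKDTree(samples)                      # build the kd-tree once
dists, idx = tree.query(samples[0], k=10)    # k nearest neighbors, ~O(k lg N) per query
```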

30 Implicit Reconstruction
Kd-tree insertion. k = # of dimensions = 2 in example below. Idea: every time you go down in tree, you toggle decision-making dimension.

31 Implicit Reconstruction
Kd-tree insertion. k = # of dimensions = 2 in example below. Idea: every time you go down in tree, you toggle decision-making dimension.

32 Implicit Reconstruction
Kd-tree search operation. Search (5, 2) below. 5 > 3 go right, 2 > 1 go right, 5 < 6 go left, 2 < 9 go left (empty), not found.

33 Implicit Reconstruction
Kd-tree partitioning. Going left or right means partitioning the space into two axis-aligned bounding boxes: after (3,7) is inserted, everything with x <= 3 goes to the left of the 3-line and everything with x > 3 to the right; after (8,1) is inserted, the right half is further split (w.r.t. the y dimension) into everything <= 1 below the 1-line and everything > 1 above it. (6,6) then lands right of the 3-line and top of the 1-line.

34 Implicit Reconstruction
Kd-tree partitioning. Going left or right means partitioning the space. (2,6) inserted (1,7) inserted (8,6) inserted (5,9) inserted

35 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Our algo should return point 5 (as the nearest one to the query). Algo: + Check the distance from the point in the node to the query point. + Recursively search the left/bottom subtree only if it could contain a closer point (otherwise cut it out). + Recursively search the right/top subtree only if it could contain a closer point (otherwise cut it out). + Organize the method so that it begins by searching towards the query point.
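A minimal 2D kd-tree sketch of the insertion and nearest-neighbor procedure described above (toggle the splitting axis per level; descend towards the query first; prune the far subtree when the splitting line is farther than the current champion). The class and function names are illustrative; the data points and query are the ones from the insertion/search slides:

```python
import math

class Node:
    def __init__(self, point, axis):
        self.point, self.axis = point, axis
        self.left = self.right = None            # <= goes left, > goes right

def insert(root, point, axis=0, k=2):
    if root is None:
        return Node(point, axis)
    side = 'left' if point[axis] <= root.point[axis] else 'right'
    setattr(root, side, insert(getattr(root, side), point, (axis + 1) % k, k))
    return root

def nearest(root, query, best=None):
    if root is None:
        return best
    if best is None or math.dist(query, root.point) < math.dist(query, best):
        best = root.point                        # update the champion
    a = root.axis
    near, far = ((root.left, root.right) if query[a] <= root.point[a]
                 else (root.right, root.left))
    best = nearest(near, query, best)            # go towards the query first
    if abs(query[a] - root.point[a]) < math.dist(query, best):
        best = nearest(far, query, best)         # far side could still hold a closer point
    return best                                  # otherwise that subtree is cut out

pts = [(3, 7), (8, 1), (6, 6), (2, 6), (1, 7), (8, 6), (5, 9)]  # points from the slides
root = None
for p in pts:
    root = insert(root, p)
print(nearest(root, (5, 2)))                     # nearest stored point: (8, 1) for this data
```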

36 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Go towards query point, i.e., left. (there might be a closer pnt than 1 in the right; at least we think so now).

37 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. When recursion gets back to pnt 1, we won’t search the right subtree ‘cos there could be no point closer to query than 3. //cutting out 

38 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Going towards the query point, so checking the top of 3. Pnt 6 is not better so do not update the champion.

39 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Check left subtree of pnt 6 (going towards query), empty, so do nothing.

40 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Do not check the right subtree of pnt 6 ‘cos they can’t be closer than 3.

41 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Recursion checks the bottom subtree of pnt 3, which takes us to pnt 4, which is not closer; so 3 is still the champion.

42 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Check left subtree of pnt 4 (towards query), where we have pnt 5, the new champion.

43 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Check top of pnt 5 (no pnts there), do nothing.

44 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Check bottom of pnt 5 (no pnts there), do nothing.

45 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Recursion comes back to the right subtree of pnt 4, but cut it out (no search) ‘cos there can’t be a closer pnt there than pnt 5.

46 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Recursion comes back to pnt 3, whose both subtrees are handled.

47 Implicit Reconstruction
Kd-tree nearest-neighbor to a query point. Idea: As we get closer to the query point, we are cutting out all the subtrees that are away. Recursion comes back to pnt 1, right subtree is next, but cut it out (no search) ‘cos there can’t be a closer pnt there than pnt 5.

48 Implicit Reconstruction
We efficiently (kd-tree) computed signed distance function F. The only thing left is to find the intersection point on the square/cube edge: Square: Cube:

49 Implicit Reconstruction
We efficiently (kd-tree) computed signed distance function F. The only thing left is to find the intersection point on the square/cube edge: Square: Cube:

50 Implicit Reconstruction
We efficiently (kd-tree) computed the signed distance function F. The only thing left is to find the intersection point x on the square/cube edge: F(u) = F0 * (1-u) + F1 * u => for F = 0, u = F0 / (F0 - F1) //F is linear on edges. x = x0 + u * (x1 - x0) => use the u computed above and locate x.
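The same interpolation as a tiny sketch (NumPy points assumed), directly implementing u = F0 / (F0 - F1) and x = x0 + u (x1 - x0):

```python
import numpy as np

def edge_intersection(x0, x1, F0, F1):
    """Zero crossing on a grid edge whose endpoint values F0, F1 have opposite
    signs, assuming F varies linearly along the edge."""
    u = F0 / (F0 - F1)            # solves F0*(1-u) + F1*u = 0
    return x0 + u * (x1 - x0)

print(edge_intersection(np.array([0.0, 0.0]), np.array([1.0, 0.0]), -1.0, 3.0))  # ~[0.25 0.]
```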

51 Shape Synthesis So far we dealt with generation of surfaces/meshes from implicit scalar fields. One can also generate meshes (= synthesize shapes) by growing an existing set of seed meshes.

52 Part-based Shape Synthesis
One of the advantages of using digital models is the ease of generating new models based on the seed models. One method is shuffling and deforming existing parts in correspondence. Genetic algo: crossovers (shuffling) and mutations (deformations). Paper: Set Evolution for Inspiring 3D Shape Galleries

53 PCA-based Shape Synthesis
One of the advantages of using digital models is the ease of generating new models based on the seed models. We will learn a different approach: learning shape space via PCA.

54 PCA-based Shape Synthesis
One of the advantages of using digital models is the ease of generating new models based on the seed models. We will learn a different approach: learning shape space via PCA.

55 PCA-based Shape Synthesis
One of the advantages of using digital models is the ease of generating new models based on the seed models. Having learned the variation below, synthesize the next mesh.

56 PCA-based Shape Synthesis
One of the advantages of using digital models is the ease of generating new models based on the seed models. How about this variation?

57 PCA PCA: Principal Component Analysis.
PCA: given a point* set, PCA finds a good orthogonal basis that represents these points well. PCA: given a point set, PCA finds a good orthogonal basis that is well-aligned with the data. PCA: discovers the directions along which the data has large variation. * Note that points may represent shapes as well, i.e., in R12500

58 PCA Applications Line fit, plane fit. Dimension reduction.
Oriented bounding box (OBB) computation. (Figure: the axis-aligned bounding box (AABB) of the same point set, for comparison.)

59 PCA Applications Line fit, plane fit. Dimension reduction.
Oriented bounding box (OBB) computation. Dot products with the principal axes give the min/max extents along p and q.
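A sketch of the OBB computation this slide describes (NumPy assumed; the function name is illustrative): principal axes from the covariance matrix, then dot products with those axes to get the min/max extents.

```python
import numpy as np

def oriented_bounding_box(points):
    m = points.mean(axis=0)
    Y = points - m
    _, axes = np.linalg.eigh(Y.T @ Y)        # columns = principal axes
    proj = Y @ axes                          # dot products: coordinates in the PCA frame
    return m, axes, proj.min(axis=0), proj.max(axis=0)
```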

60 PCA Computation Compute center of mass m of n data points x1 to xn.
Translate data points so that origin is at m: yi = xi – m for 1≤i≤n. Given the matrix Y = [y1 y2 ... yn] whose columns are the centered points, compute the scatter (covariance) matrix S = YYT, measuring the variance of the points in different directions. Eigenvectors of S (in the descending order of associated eigenvalues) give the desired principal directions.
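These steps as a short sketch (NumPy assumed; the function name is illustrative):

```python
import numpy as np

def pca(points):
    """points: n x d array of data points. Returns the mean, the principal
    directions (columns, by decreasing eigenvalue), and the eigenvalues."""
    m = points.mean(axis=0)                  # 1. center of mass
    Y = (points - m).T                       # 2. d x n matrix of centered points
    S = Y @ Y.T                              # 3. scatter (covariance) matrix
    evals, evecs = np.linalg.eigh(S)         # 4. eigendecomposition (ascending order)
    order = np.argsort(evals)[::-1]          #    reorder to descending eigenvalues
    return m, evecs[:, order], evals[order]
```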

61 Variance vs. Covariance
Variance represents the spread of the data points: Variance σ^2 is the square of the standard deviation σ. If the data is normally distributed, 68.3% of the observations/samples will have a value b/w μ - σ and μ + σ, where μ is the mean of the data pts. Hence, normally distributed data can be characterized by its mean and variance. Here is the probability density function for normally distributed data: f(x) = 1 / (σ sqrt(2 pi)) * exp(-(x - μ)^2 / (2 σ^2)).

62 Variance vs. Covariance
Variance represents the spread of the data points in axis-aligned directions, e.g., the x-direction (σx^2) or y-direction (σy^2) for 2D data. However, horizontal and vertical spreads may not be enough, e.g., they do not explain the clear diagonal correlation below: On average, if x increases, so does y; hence a positive correlation captured by the covariance σxy, and summarized in this covariance matrix: [[σx^2, σxy], [σxy, σy^2]]. Symmetric matrix w/ variances on the diagonal, covariances off-diagonal.

63 Variance vs. Covariance
2D data is fully explained by its mean and 2x2 covariance matrix. 3D data is fully explained by its mean and 3x3 covariance matrix.

64 Variance vs. Covariance
2D data is fully explained by its mean and 2x2 covariance matrix. 3D data is fully explained by its mean and 3x3 covariance matrix. Example: variance of the x-coordinates = .27, of the y-coordinates = .25, and covariance b/w the x- and y-coordinates = -.26 here (no variance in the z-coordinates).

65 Eigendecomposition of Covariance Matrix
Recall that eigenvectors of the covariance matrix S give the desired principal directions of variations, i.e., the principal components. Eigenvalues represent the variance of the data along the corresponding eigenvector directions; so pick the components w.r.t. eigenvalues. In 3D, all eigenvalues similar => no preferred direction, i.e., sphere. In 3D, one tiny eigenvalue => data sampled on a plane whose normal is the eigenvector with that tiny eigenvalue (this is how we fit planes to 3D data). In 2D, all eigenvalues similar => circle. In 2D, one tiny eigenvalue => line whose direction is the eigenvector with the large eigenvalue (this is how we fit lines to 2D data).

66 Eigendecomposition of Covariance Matrix
Recall that eigenvectors of the covariance matrix S give the desired principal directions of variations, i.e., the principal components. Why? Spectral Theorem: a square symmetric matrix M (just like our S) can be diagonalized by choosing a new orthogonal coordinate sys, given by its eigenvectors. The corresponding eigenvalues form the diagonal matrix. In this new coordinate system, M will be diagonal. Replace M with S. Obtaining a diagonal covariance matrix is what we want 'cos it zeros the correlations, i.e., the off-diagonal covariances (only the variances remain on the diagonal). Diagonalization of the symmetric S is made possible via eigendecomposition.

67 PCA Summary PCA finds the best possible characteristics, the best summary of the data. Best: maximum variance/spread occurs along the principal direction. Best: minimum error (total red line length) along the principal direction.

68 PCA Summary Why does max variance give min error, or vice versa?
Variance: average squared distance from the center to each red dot. Error: average squared red line lengths. Average squared distance from the center to each blue dot is constant. Constant = Variance + Error by the Pythagorean Theorem. So the higher the variance the lower the error, and vice versa (to keep the sum constant).
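A quick numerical check of this identity on random 2D data (NumPy assumed; the data matrix is made up): the mean squared distance to the center splits into variance along the principal direction plus the squared residual error.

```python
import numpy as np

rng = np.random.default_rng(0)
pts = rng.normal(size=(200, 2)) @ np.array([[2.0, 0.6], [0.6, 1.0]])  # correlated 2D data
pts -= pts.mean(axis=0)
d = np.linalg.eigh(pts.T @ pts)[1][:, -1]            # principal direction (unit vector)
proj = pts @ d                                       # coordinates along d
variance = np.mean(proj ** 2)                        # spread along the principal axis
error = np.mean(np.sum((pts - np.outer(proj, d)) ** 2, axis=1))  # squared residuals
total = np.mean(np.sum(pts ** 2, axis=1))            # constant for any direction d
print(np.isclose(variance + error, total))           # True (Pythagoras)
```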

69 PCA Summary PCA constructs this cool principal direction which provides the max variance, min error (and the other orthonormal directions, which are the next principals).

70 Linear Algebra w/ Geometric Perspective
For a geometric interpretation of eigenstuff, let's visualize what matrices do: they represent linear transformations. v' = Mv is the linearly transformed version of v. Rotation, scaling, reflection, shear are common linear transforms. A transformation is linear if lines remain lines & the origin remains fixed. Translation is not linear as it moves the origin. Hence there is no translation matrix in standard coordinates. We use homogeneous coordinates to make translation linear, and therefore representable by a matrix. Once translation is also a matrix, we can apply consecutive transformations in the same matrix multiplication pipeline. That's why we use homogeneous coordinates in Computer Graphics.

71 Basis For a geometric interpretation of eigenstuff, let's visualize what matrices do: they represent linear transformations. v' = Mv is the linearly transformed version of v. Rotation, scaling, reflection, shear are common linear transforms. A transformation is linear if lines remain lines & the origin remains fixed. Any matrix can be analyzed by studying its operation on the basis vectors of the space, e.g., standard basis: i=(1,0,0), j=(0,1,0), k=(0,0,1). Basis: linearly independent set of vectors that span the vector space. Any vector in the vector space (white) can be represented as a linear combination of basis vectors (purple set, or red set). A good basis often consists of orthogonal vectors, e.g., i, j, k. Orthogonality implies linear independence but not vice versa. Orthonormality: 2 vectors are orthogonal and unit (length=1).

72 Basis For a geometric interpretation of eigenstuff, let's visualize what matrices do: they represent linear transformations. v' = Mv is the linearly transformed version of v. Rotation, scaling, reflection, shear are common linear transforms. A transformation is linear if lines remain lines & the origin remains fixed. Any matrix can be analyzed by studying its operation on the basis vectors of the space, e.g., standard basis: i=(1,0,0), j=(0,1,0), k=(0,0,1). Basis: linearly independent set of vectors that span the vector space. Any vector in the vector space (white) can be represented as a linear combination of basis vectors (purple set, or red set). A good basis often consists of orthogonal vectors, e.g., i, j, k. Orthogonality implies linear independence but not vice versa. Orthonormality: 2 vectors are orthogonal and unit (length=1).

73 Matrix For a geometric interpretation of eigenstuff, let's visualize what matrices do: they represent linear transformations. v' = Mv is the linearly transformed version of v. M: linear operator. Rotation, scaling, reflection, shear are common linear transforms. A transformation is linear if lines remain lines & the origin remains fixed. Any matrix can be analyzed by studying its operation on the basis vectors of the space, e.g., standard basis: i=(1,0,0), j=(0,1,0), k=(0,0,1). Columns of M are the landing points of the basis vectors of the space.

74 Matrix For a geometric interpretation of eigenstuff, let's visualize what matrices do: they represent linear transformations. v' = Mv is the linearly transformed version of v. M: linear operator. Rotation, scaling, reflection, shear are common linear transforms. A transformation is linear if lines remain lines & the origin remains fixed. Any matrix can be analyzed by studying its operation on the basis vectors of the space, e.g., standard basis: i=(1,0,0), j=(0,1,0), k=(0,0,1). Columns of M are the landing points of the basis vectors of the space. det(M) = 6 is the scale factor by which areas are transformed by M.

75 Matrix For a geometric interpretation of eigenstuff, let's visualize what matrices do: they represent linear transformations. v' = Mv is the linearly transformed version of v. M: linear operator. Rotation, scaling, reflection, shear are common linear transforms. A transformation is linear if lines remain lines & the origin remains fixed. Any matrix can be analyzed by studying its operation on the basis vectors of the space, e.g., standard basis: i=(1,0,0), j=(0,1,0), k=(0,0,1). Columns of M are the landing points of the basis vectors of the space.

76 Eigenvectors and Eigenvalues
Eigenvector of a transformation M is a vector that has not changed at all after transformation M except by a scalar known as the eigenvalue. Mv = λv, where v is an eigenvector of M w/ the eigenvalue λ. Eigendecomposition: M = VΛV^-1; for a symmetric M (like our covariance matrix S), V is orthogonal, so M = VΛVT (aka Spectral Decomposition).

77 Eigenvectors and Eigenvalues
Eigenvector of a transformation M is a vector that has not changed at all after transformation M except by a scalar known as the eigenvalue. Mv = λv, where v is an eigenvector of M w/ the eigenvalue λ. 2 eigenvectors for this M (a scale by 3 along x and by 2 along y): [1 0]T and [0 1]T w/ eigenvalues 3 and 2, respectively. Any vector in an eigenvector direction, e.g., [9 0]T, is still an eigenvector w/ the same eigenvalue; ordinarily we consider eigenvectors as unit vectors. λ1 = 3, λ2 = 2.
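A quick check of these eigenpairs with NumPy (assumed), also showing that a scaled eigenvector such as [9 0]T keeps the same eigenvalue:

```python
import numpy as np

M = np.array([[3.0, 0.0],
              [0.0, 2.0]])
evals, evecs = np.linalg.eig(M)
print(evals)                 # [3. 2.]
print(evecs)                 # columns are [1 0]^T and [0 1]^T
v = np.array([9.0, 0.0])     # any vector along an eigenvector direction
print(M @ v, 3 * v)          # M v == 3 v, so the eigenvalue is still 3
```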

78 Eigenvectors and Eigenvalues
Eigenvector of a transformation M is a vector that has not changed at all after transformation M except by a scalar known as the eigenvalue. Mv = λv, where v is an eigenvector of M w/ the eigenvalue λ. A uniform scale by 2 has all vectors as eigenvectors, but only one eigenvalue of 2. The eigenvalue can be visualized as the stretch amount of the original vector. λ = 2.

79 Eigenvectors and Eigenvalues
Eigenvector of a transformation M is a vector that has not changed at all after transformation M except by a scalar known as the eigenvalue. Mv = λv, where v is an eigenvector of M w/ the eigenvalue λ. 2 eigenvectors for this M: [1 0]T and [-1 1]T w/ eigenvalues 3 and 2, respectively. λ1 = 3, λ2 = 2.

80 Eigenvectors and Eigenvalues
Eigenvector of a transformation M is a vector that has not changed at all after transformation M except by a scalar known as the eigenvalue. Mv = λv, where v is an eigenvector of M w/ the eigenvalue λ. No (real) eigenvectors for this M: a generic 2D rotation has no real eigenvectors 'cos all vectors change direction.

81 Eigenvectors and Eigenvalues
Eigenvector of a transformation M is a vector that has not changed at all after transformation M except by a scalar known as the eigenvalue. Mv = λv, where v is an eigenvector of M w/ the eigenvalue λ. An eigenvector of a 3D rotation (3x3 M) is the axis of rotation, as it stays on its span. The corresponding eigenvalue is 1 as rotations don't stretch/squish objects. Eigenvector of M (pink in the figure).

82 PCA-based Shape Synthesis
Back to business after that algebraic detour. One of the advantages of using digital models is the ease of generating new models based on the seed models. Having learned the variation below, synthesize the next mesh.

83 PCA-based Shape Synthesis
Learn the shape space through PCA. Correlate semantic attributes and PCA coordinates. Create new shape based on new attributes. No fancy devices/scanners, just simple keyboard inputs.

84 PCA Computation Compute center of mass m of n data points x1 to xn.
Translate data points so that origin is at m: yi = xi – m for 1≤i≤n. Given the matrix Y = [y1 y2 ... yn] whose columns are the centered points, compute the scatter (covariance) matrix S = YYT, measuring the variance of the points in different directions. Eigenvectors of S (in the descending order of associated eigenvalues) give the desired principal directions.

85 PCA Computation Compute the center of mass m of n shapes x1 to xn. m: mean shape. Translate shapes so that origin is at m: yi = xi – m for 1≤i≤n. Given the matrix Y = [y1 y2 ... yn] whose columns are the centered shape vectors, compute the scatter (covariance) matrix S = YYT measuring the variance of the points in different directions. Eigenvectors of S (in the descending order of associated eigenvalues) give the desired principal directions. The set of eigenvectors representing surface data in this manner is called an active shape model, or statistical shape model.

86 Active Shape Model The set of eigenvectors representing surface data in this manner is called an active shape model, or statistical shape model. m: mean shape. Linear model: x = m + Pb, where the columns of P are the principal directions and b the PCA coordinates (coefficients). We don't lose much by approximating the new shape x using only a few dimensions.

87 PCA Computation How many principal components do we need?
We want to capture as much variation as possible. Eigenvalue λi gives the amount of variation in the ith direction. One heuristic: discard components whose variation ratio ri = λi / Σj λj is below, say, 1%.
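The heuristic as a tiny sketch (NumPy assumed; the 1% threshold and the function name are just for illustration):

```python
import numpy as np

def num_components(evals, threshold=0.01):
    """evals: eigenvalues sorted in descending order."""
    r = evals / evals.sum()                  # fraction of total variation per axis
    return int(np.sum(r >= threshold))       # keep only the significant components
```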

88 PCA Computation S = YYT is huge (3V x 3V) => difficult to compute eigenvectors. Since there are only n samples (n << 3V), the rank of S is at most n-1 and therefore there are at most n-1 eigenvectors w/ nonzero eigenvalues. The Karhunen-Loeve Transform (KLT) does the trick.

89 PCA Computation S = YYT is huge (3V x 3V) => difficult to compute eigenvectors. Since there are only n samples (n << 3V), the rank of S is at most n-1 and therefore there are at most n-1 eigenvectors w/ nonzero eigenvalues. The Karhunen-Loeve Transform (KLT) does the trick: Instead of computing the PCA of YYT, compute the PCA of YTY (n x n). Denoting by pi the eigenvectors of the huge YYT, and by qi the eigenvectors of the small YTY, we have YTYqi = λiqi. Multiply both sides by Y => YYT(Yqi) = Yλiqi = λi(Yqi). Thus pi = Yqi are the eigenvectors of YYT. This means that we can compute the eigenvectors of the small YTY and multiply them by Y to get the desired eigenvectors pi. Note that each pi is of size 3V x 1 and they are orthonormal.
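A sketch of this trick (NumPy assumed; function name illustrative): eigendecompose the small n x n matrix YTY, map its eigenvectors back with pi = Y qi, and renormalize since Y qi is generally not unit length.

```python
import numpy as np

def pca_klt(Y):
    """Y: 3V x n matrix of centered shape vectors (n << 3V).
    Returns the principal directions of YY^T and their eigenvalues."""
    evals, Q = np.linalg.eigh(Y.T @ Y)       # small n x n eigenproblem
    order = np.argsort(evals)[::-1]
    evals, Q = evals[order], Q[:, order]
    keep = evals > 1e-12 * evals[0]          # at most n-1 nonzero eigenvalues
    evals, Q = evals[keep], Q[:, keep]
    P = Y @ Q                                # p_i = Y q_i
    P /= np.linalg.norm(P, axis=0)           # re-normalize each p_i to unit length
    return P, evals
```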

90 Correlation PCA constructs the principal axes of variations.

91 Correlation PCA constructs the principal axes of variations, which enables a low dimensional representation to be correlated w/ semantic attributes.

92 Correlation For shape xi, correlation between the semantically meaningful input data si = [kg, cm, ..] and the shape's PCA coordinates bij is needed. Training: si is defined/precomputed for each shape xi where 1≤i≤n. Learn the correlation matrix C via a linear mapping model. More sophisticated learning by: Shared Gaussian Process Latent Variable Model. bi = C si where bi = [bi1 bi2 ..] for xi (recall si = [kg, cm, ..]). We get n different C matrices which are then averaged into C. Synthesize a new shape w/ the desired semantic attributes snew via: bnew = C snew

93 Correlation Synthesis based on the new PCA coordinates bnew is as follows: xnew = m; for each principal direction pj (in R3V): xnew += bnew.j * pj, where bnew.j gives the jth component of bnew (equivalently, xnew = m + P bnew). For example, assume p1 represents variation in height (y-coord in mesh) and p2 weight (z-coord). If bnew.1 = bnew.2 = 0, no change in the mean shape. If bnew.1 = 0, bnew.2 = 1, then xnew is not stretched vertically (no change in y); the mean moves along z on some vertices (change in weight).
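A sketch of the synthesis step (equivalently xnew = m + P bnew), with the learned correlation matrix C assumed to be given; NumPy and the argument names are assumptions:

```python
import numpy as np

def synthesize(mean_shape, P, C, s_new):
    """mean_shape: 3V mean shape vector, P: 3V x k principal directions,
    C: k x a correlation matrix, s_new: a semantic attributes [kg, cm, ...]."""
    b_new = C @ s_new                        # new PCA coordinates
    return mean_shape + P @ b_new            # x_new = m + sum_j b_new[j] * p_j
```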

94 PCA for Face Recognition
Face images are highly redundant: all background pixels are the same, and each subject has the same facial features. Use eigenvectors (called eigenfaces in this context) to represent images compactly. Use this compact representation to recognize (or create) faces efficiently.

95 PCA for Face Recognition
Use eigenvectors (called eigenfaces in this context) to represent images compactly, e.g., below: just 3 coefficients (weights on 3 principal axes). Use this compact representation, e.g., 14 images (X) below, to recognize faces efficiently.

96 PCA for Face Recognition
Mean: Four eigenfaces with the largest eigenvalues:

97 PCA for Face Recognition
3 principal components are still a close representation.

98 PCA for Face Recognition
5 principal components are still a close representation. Typically use 40.

99 Potential Project Topic
This shape synthesis work learns a continuous mapping from semantic attributes to geometry through crowdsourcing. Paper: Semantic Shape Editing Using Deformation Handles.

100 Potential Project Topic
Representation of all major modes of pose variations via PCA. Paper: Efficient and Flexible Deformation Representation for Data-Driven Surface Modeling.

101 Potential Project Topic
Representation of all major modes of face variations via PCA. Paper: A Morphable Model For The Synthesis Of 3D Faces. //FaceGen See youtu.be/KPDfMpuK2fQ (blendshapes). Paper: A 3D Morphable Model learnt from 10,000 faces. //more recent

