
1 Eigen-Texture Method: an appearance-compression-based method / Surface Light Fields for 3D Photography. Presented by Youngihn Kho

2 Eigen-Texture Method: Appearance Compression based on 3D Modeling  Authors: Ko Nishino, Yoichi Sato, Katsushi Ikeuchi (CVPR '99)  http://www.cvl.iis.u-tokyo.ac.jp/~kon/eigen-texture/index.html

3 Appearance-based method  Inputs: 3D geometry and a set of color images  Outputs: synthesized images from arbitrary viewpoints  Objective: a compressed representation of the model's appearance

4 Eigen-Texture method  Overview: sampling → encoding → decoding

5 Sampling  3D geometry is captured with a range camera.  Photos are taken while rotating the object and are registered to the geometry.  Each color image is divided into small areas according to its corresponding triangles on the object.  Cell: a normalized triangular patch.  Cell image: the color image warped onto a cell.  Compression is performed on the sequence of cell images of each cell.

6 Sampling  (Figure: appearance change of one patch as θ sweeps from 0° to 357°.)  Image coherence across the sequence enables a high compression ratio and interpolation of appearances.

7 Compression key idea  Instead of storing the whole sequence of images, find a small set of new cell images (eigen cells), then represent each cell image as a linear combination of those eigen cells.  Store only the eigen cells and the coefficients of each cell.

8 At a glance…  Original data: two images, size = 2×N.  Using one principal vector: size = N + 2 (one basis image plus two coefficients).

9 Matrix construction  A single cell image is flattened into a vector of N pixels.  The sequence of cell images is stacked into an M × N matrix.  M: # of images, N: # of pixels in each cell.

10 Eigen method  Compute the k leading eigenvectors (eigen cells) of the cell-image matrix.
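The eigen-cell decomposition described on the last few slides can be sketched with a plain SVD; all sizes and the random data below are illustrative stand-ins, not values from the paper.

```python
import numpy as np

# Stack M cell images (each flattened to N pixels) as rows of a matrix,
# then take the top-k right singular vectors as the "eigen cells".
M, N, k = 60, 300, 8                  # illustrative sizes (assumption)
rng = np.random.default_rng(0)
X = rng.random((M, N))                # stand-in for a sequence of cell images

mean = X.mean(axis=0)                 # mean cell image, removed before SVD
U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)

eigen_cells = Vt[:k]                  # k basis images, shape (k, N)
scores = (X - mean) @ eigen_cells.T   # coefficients per cell image, (M, k)

# Decoding: each cell image is a linear combination of the eigen cells.
X_hat = mean + scores @ eigen_cells
```

Storing `eigen_cells` and `scores` in place of `X` is exactly the trade the slide describes: k basis images plus k coefficients per view instead of M full images.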

11 Eigen ratio  Sort the eigenvalues and pick the k largest.  Keeping all eigenvalues would preserve the original exactly, but compression requires truncating to k.
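The eigen-ratio rule amounts to choosing the smallest k whose cumulative eigenvalue mass crosses a threshold; the eigenvalues below are made up for illustration.

```python
import numpy as np

# Hypothetical eigenvalues of one cell sequence, already sorted descending.
eigvals = np.array([50.0, 20.0, 10.0, 5.0, 3.0, 1.0, 0.5, 0.5])
threshold = 0.95                      # target eigen ratio (assumption)

# Cumulative eigen ratio; k = smallest prefix reaching the threshold.
ratio = np.cumsum(eigvals) / eigvals.sum()
k = int(np.searchsorted(ratio, threshold) + 1)
```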

12 Decoding with eigen vectors  A synthesized view is a linear combination of the k eigen-cells (base images) weighted by their scores: a0 × (eigen cell 0) + a1 × (eigen cell 1) + … + ak−1 × (eigen cell k−1) = synthesized view.

13 Compression ratio  Size of a sequence of cell images: M × 3N.  Size of the k eigen cells: k × 3N.  Size of the "coefficients" of the image cells: k × M.  Therefore, the compression ratio is (k × 3N + k × M) / (M × 3N).
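The compression-ratio arithmetic above can be checked with a quick numeric example; M, N, and k below are arbitrary illustrative values.

```python
# Counts of stored values; 3N = RGB pixels per cell image.
M, N, k = 120, 300, 6                  # illustrative sizes (assumption)

original = M * 3 * N                   # full sequence of M cell images
compressed = k * 3 * N + k * M         # k eigen cells + k scores per image
ratio = compressed / original          # ~0.057 here: ~18x compression
```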

14 Cell-adaptive dimensions  Several factors influence the required dimension: specularity, mesh precision, shadowing, …  Since each sequence of cell images is compressed independently, a different dimension can be used for each cell.  This is done with a fixed eigen-ratio threshold.

15 Interpolation  Synthesis from a novel viewpoint is done in eigen-space by interpolating the "coefficients" (or scores).
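Interpolating in eigen-space, as the slide describes, means blending score vectors of two sampled views and then decoding; every array below is a hypothetical stand-in.

```python
import numpy as np

# Stand-ins for a trained basis and the scores of two neighboring views.
k, N = 4, 100                              # illustrative sizes (assumption)
rng = np.random.default_rng(1)
mean = rng.random(N)
eigen_cells = rng.random((k, N))
scores_a, scores_b = rng.random(k), rng.random(k)

t = 0.5                                    # halfway between the two views
scores_t = (1 - t) * scores_a + t * scores_b
cell_t = mean + scores_t @ eigen_cells     # decoded intermediate cell image
```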

16 Integrating into a real scene  The scene can be rendered under an arbitrary illumination condition.  Sample color images under several single-light conditions.  Synthesize the scene under an approximate arbitrary illumination as a linear combination of those base images.
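The relighting step relies on light transport being linear in the light sources, so images captured under single lights act as a basis; the image data and weights below are illustrative.

```python
import numpy as np

# Three stand-in renderings, each under one point light (hypothetical data).
rng = np.random.default_rng(2)
base_images = rng.random((3, 64, 64))
weights = np.array([0.5, 0.3, 0.2])     # relative light intensities (assumption)

# Weighted sum of the single-light images approximates the combined lighting.
relit = np.tensordot(weights, base_images, axes=1)
```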

17

18 Shadowing

19 Discussion  Contributions: high compression ratio; interpolation in eigen-space; global illumination effects.  Drawbacks: expensive pre-computation time; limited viewing positions; requires a dense mesh?

20 Surface Light Fields for 3D Photography  Authors: Daniel N. Wood, et al. (SIGGRAPH 2000)  http://www.cs.washington.edu/research/graphics/projects/slf/

21 Surface Light Field  A surface light field is a function that assigns a color to each ray originating from every point on the surface.  Conceptually, every point on the surface has its own corresponding lumisphere.

22 Overview Data Acquisition Estimation And Compression Rendering Editing

23 Overview Data Acquisition Estimation And Compression Rendering Editing

24 Data Acquisition  Range scanning  Reconstructing the geometry  Acquiring the photographs  Registering the photographs to the geometry

25 Scan and reconstruct  Uses a closest-points registration algorithm and the volumetric reconstruction method of Curless and Levoy.

26 Acquiring photographs  Used the Stanford spherical gantry with a calibrated camera.

27 Register photographs to the geometry Manually selected correspondences

28 How to represent?  MAPS (Multiresolution Adaptive Parameterization of Surfaces): base mesh + wavelets (Aaron Lee, et al., SIGGRAPH '98).  (Figure: the original mesh mapped onto the base mesh.)

29 Data lumispheres at each point  (Figure: lumisphere, base mesh, scanned mesh.)

30 Overview Data Acquisition Estimation And Compression Rendering Editing

31 Estimation and Compression  Estimation problem: how do we fit a piecewise-linear lumisphere to the given data lumispheres?  Three methods: pointwise fairing; function quantization; principal function analysis.

32 Pointwise fairing  Estimate the least-squares best-approximating lumisphere for each surface point individually.  Error function = distance term + thin-plate energy term.  Yields high quality but suffers from large file size, so a compression technique is needed.

33 Pointwise fairing

34

35 Compression  We don't want each grid point to have its own lumisphere.  Rather, we want a small set of lumispheres that can nicely approximate all the data lumispheres.  Standard techniques: vector quantization; principal function analysis.

36 Compression in pointwise fairing  One could run vector quantization or principal component analysis directly on the pointwise-fairing results.  Not a good idea: those results have already been through a re-sampling step, and many parts of them are fiction!  So instead, manipulate the data lumispheres directly.

37 Two pre-processing steps  Two transformations are applied to make the data more compressible: median removal; reflected re-parameterization.

38 Median removal
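Median removal can be sketched as subtracting each lumisphere's per-channel median (roughly its diffuse color) so that the stored residual is mostly view-dependent and compresses better; the sample counts and data below are assumptions.

```python
import numpy as np

# One stand-in lumisphere: 32 directional samples with RGB values.
rng = np.random.default_rng(5)
lumisphere = rng.random((32, 3))

median = np.median(lumisphere, axis=0)   # kept separately as the diffuse part
residual = lumisphere - median           # what actually gets compressed
```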

39 Reflected re-parameterization
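Reflected re-parameterization re-indexes each lumisphere sample by the mirror reflection of the viewing direction about the surface normal, which aligns specular highlights across neighboring points; the standard reflection formula is sketched below on made-up vectors.

```python
import numpy as np

def reflect(omega, n):
    """Mirror direction omega about unit normal n: 2(n·omega)n - omega."""
    return 2.0 * np.dot(n, omega) * n - omega

# Illustrative normal and unit viewing direction (assumptions).
n = np.array([0.0, 0.0, 1.0])
omega = np.array([1.0, 0.0, 1.0]) / np.sqrt(2)
r = reflect(omega, n)
```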

40

41 Effect of reflection  (Figure: before vs. after.)

42 Function quantization  Build a codebook of lumispheres from the input data lumispheres.

43 Lloyd iteration  Start with a single initial codeword and a random set of training lumispheres, then repeat until the codebook reaches the desired size: split and perturb the codebook, then repeatedly apply - Projection: find each training lumisphere's closest codeword index. - Optimization: for each cluster, find the best piecewise-linear lumisphere - until the change in error falls below a threshold.
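A minimal Lloyd-style sketch of the projection/optimization loop, treating each lumisphere as a flat vector and using cluster means in the optimization step (a simplification of the piecewise-linear fit in the paper); all data and sizes are assumptions.

```python
import numpy as np

# Stand-in training set: 200 flattened lumispheres of 16 values each.
rng = np.random.default_rng(3)
data = rng.random((200, 16))
# Initialize 4 codewords from random training samples (a simplification
# of the split-and-perturb schedule on the slide).
codebook = data[rng.choice(200, 4, replace=False)].copy()

for _ in range(20):
    # Projection: nearest codeword index for every training sample.
    d = np.linalg.norm(data[:, None, :] - codebook[None, :, :], axis=2)
    assign = d.argmin(axis=1)
    # Optimization: the least-squares best codeword per cluster is its mean.
    for j in range(len(codebook)):
        if np.any(assign == j):
            codebook[j] = data[assign == j].mean(axis=0)
```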

44 Lloyd iteration Codeword

45 Lloyd iteration Clone and perturb code words

46 Lloyd iteration Divided by several clusters

47 Lloyd iteration Optimize code words in the cluster

48 Lloyd iteration New clusters

49 Principal function analysis  A generalization of principal component analysis.  Again we find a set of codewords (prototypes), but instead of assigning one to each grid point, we approximate each data lumisphere with a linear combination of prototypes.
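The contrast with vector quantization can be sketched numerically: instead of snapping each data lumisphere to one codeword, project it onto the span of a few prototypes. Plain PCA stands in for principal function analysis here, and the data is made up.

```python
import numpy as np

# Stand-in data lumispheres, flattened to vectors.
rng = np.random.default_rng(4)
data = rng.random((200, 16))
mean = data.mean(axis=0)
_, _, Vt = np.linalg.svd(data - mean, full_matrices=False)

prototypes = Vt[:3]                       # 3 prototype directions (assumption)
coeffs = (data - mean) @ prototypes.T     # per-point coefficients
approx = mean + coeffs @ prototypes       # linear-combination reconstruction
```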

50 Principal function analysis Subspace of lumispheres Input data lumisphere Prototype lumisphere

51 Principal function analysis Approximating subspace Prototype lumisphere

52 Principal function analysis

53

54 Compression results  Pointwise fairing; FQ with 1024 codewords; PFA with 2 dimensions; PFA with 5 dimensions.

55 Overview Data Acquisition Estimation And Compression Rendering Editing

56 Rendering  Basic algorithm: 1. First pass: render the geometry in false color, encoding the face ID and barycentric coordinates of each pixel. 2. Second pass: scan the frame buffer and evaluate the SLF with the view direction.
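The first-pass "false color" trick packs an integer ID into pixel channels so the second pass can recover which triangle each pixel covers; the 8-bits-per-channel packing below is an assumption for illustration, not the paper's exact encoding.

```python
# Pack a face ID into an (R, G, B) triple, 8 bits per channel (assumption).
def encode_id(face_id):
    return ((face_id >> 16) & 0xFF, (face_id >> 8) & 0xFF, face_id & 0xFF)

# Second pass: recover the face ID from the frame-buffer color.
def decode_id(rgb):
    r, g, b = rgb
    return (r << 16) | (g << 8) | b

rgb = encode_id(123456)
```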

57 Rendering  (Figure: first pass vs. second pass.)

58 View-dependent rendering  View-dependent LOD (Hoppe, et al.).  Basic principle: subdivide the "important" parts more (add wavelets).  Three criteria: 1. view frustum; 2. surface orientation; 3. screen-space geometric error.

59 Example - view dependent LOD

60 Overview Data Acquisition Estimation And Compression Rendering Editing

61 Editing  Editing is possible thanks to the decoupling of the geometry and the light field.  Three operations: - Lumisphere editing - Rotating the environment - Deforming the geometry

62 Editing - lumisphere editing  Applying simple image processing to the lumispheres.  (Figure: original vs. filtered.)

63 Editing - rotating the environment  Rotating the lumispheres.  (Figure: original vs. rotated.)

64 Editing - deforming the geometry  Embed the modified base mesh.  Compute new normals and set the reflected re-parameterization with respect to them.  (Figure: original vs. deformed.)

65 Editing - limitations  The edits are not always physically correct!  They are more nearly correct if: - the environment is infinitely far away - there is no occlusion, shadowing, or interreflection  One more problem: the lumisphere sampling does not cover the complete sphere, so some inference is needed.  Even so, the results look nice.

66 Some statistics - compression
Pointwise fairing: memory = 177 MB, RMS error = 9
FQ (2000 codewords): memory = 3.4 MB, RMS error = 23
PFA (dimension 3): memory = 2.5 MB, RMS error = 24
PFA (dimension 5): memory = 2.9 MB, RMS error = ?

67 Some statistics - time
Compute times on a ~450 MHz P-III:
- Range scanning: 3 hours
- Geometry registration: 2 hours
- Image-to-geometry alignment: 6 hours
- MAPS (sub-optimal): 5 hours
- Assembling data lumispheres: 24 hours
- Pointwise fairing: 30 minutes
- FQ codebook construction (10%): 30 hours
- FQ encoding: 4 hours
- PFA "codebook" construction (0.1%): 20 hours
- PFA encoding: 2 hours

68 Benchmark  Pointwise-faired surface light field (177 MB) vs. uncompressed two-plane light field (177 MB).

69 Benchmark  Principal-function-analysis surface light field (2.5 MB) vs. vector-quantized two-plane light field (8.1 MB).

70 Summary  Estimation and compression: function quantization; principal function analysis.  Rendering: from the compressed representation; view-dependent LOD.  Editing: lumisphere filtering and rotation; geometry deformation.

71 Future  Combining function quantization and principal function analysis  Wavelet representation of a surface light field  Hole filling using texture synthesis

