
1 Exploitation of 3D Video Technologies
Takashi Matsuyama, Graduate School of Informatics, Kyoto University
12th International Conference on Informatics Research for Development of Knowledge Society Infrastructure (ICKS’04)

2 Outline
Introduction
3D Video Generation
Deformable Mesh Model
Texture Mapping Algorithm
Editing and Visualization System
Conclusion

3 Introduction
PC cluster for real-time active 3D object shape reconstruction

4 3D Video Generation
Synchronized Multi-View Image Acquisition → Silhouette Extraction → Silhouette Volume Intersection → Surface Shape Computation → Texture Mapping

5 Silhouette Volume Intersection

6 Plane-to-Plane Perspective Projection
– The 3D voxel space is partitioned into a group of parallel planes

7 Plane-to-Plane Perspective Projection
1. Project the object silhouette observed by each camera onto a common base plane.
2. Project each base-plane silhouette onto the other parallel planes.
3. Compute the 2D intersection of all silhouettes projected on each plane.
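To make the three steps concrete, here is a minimal Python sketch, assuming the homographies mapping each camera image to the base plane, and each base-plane silhouette to every parallel plane, have been precomputed from calibration (all names and interfaces below are assumptions, not the authors' code):

```python
import numpy as np
import cv2

def plane_based_intersection(silhouettes, H_base, H_plane, size):
    """Plane-based silhouette volume intersection (a sketch).

    silhouettes  : per-camera binary masks (uint8)
    H_base[j]    : 3x3 homography, camera j's image -> base plane
    H_plane[k][j]: 3x3 homography, camera j's base-plane silhouette
                   -> parallel plane k (depends on the camera centre)
    size         : (W, H) resolution of each cross-section
    """
    # Step 1: project each observed silhouette onto the common base plane.
    base = [cv2.warpPerspective(s, H, size) for s, H in zip(silhouettes, H_base)]
    volume = []
    for per_cam_H in H_plane:
        # Step 2: duplicate each base-plane silhouette onto this plane.
        proj = [cv2.warpPerspective(b, H, size) for b, H in zip(base, per_cam_H)]
        # Step 3: the object cross-section is the 2D intersection
        # (logical AND) of all projected silhouettes.
        cross = np.bitwise_and.reduce([(p > 0) for p in proj])
        volume.append(cross)
    return np.stack(volume)  # (planes, H, W) binary occupancy volume
```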

8 Linearized Plane-to-Plane Perspective Projection (LPPPP) algorithm

9 Parallel Pipeline Processing on a PC cluster system
Pipeline stages:
– Silhouette Extraction
– Projection to the Base Plane
– Base-Plane Silhouette Duplication
– Object Cross-Section Computation
(diagram: stages distributed over PCs with and without cameras)
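As a toy, single-machine analogue of the cluster pipeline, the sketch below chains per-frame stages through queues so that different frames occupy different stages concurrently; the actual system distributes these stages over networked PCs, and the stage functions here are placeholders:

```python
from multiprocessing import Process, Queue

def stage(fn, q_in, q_out):
    """Run one pipeline stage: apply fn to every item from q_in."""
    while (item := q_in.get()) is not None:
        q_out.put(fn(item))
    q_out.put(None)  # propagate shutdown to the next stage

def run_pipeline(stages, frames):
    """Chain the per-frame stages; while frame k is in stage 2,
    frame k+1 can already be in stage 1, and so on."""
    queues = [Queue() for _ in range(len(stages) + 1)]
    procs = [Process(target=stage, args=(fn, queues[i], queues[i + 1]))
             for i, fn in enumerate(stages)]
    for p in procs:
        p.start()
    for f in frames:
        queues[0].put(f)
    queues[0].put(None)
    results = []
    while (r := queues[-1].get()) is not None:
        results.append(r)
    for p in procs:
        p.join()
    return results
```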

10 Parallel Pipeline Processing on a PC cluster system
(table: average computation time for each pipeline stage)

11 3D Video Generation
Synchronized Multi-View Image Acquisition → Silhouette Extraction → Silhouette Volume Intersection → Surface Shape Computation → Texture Mapping

12 Deformable Mesh Model
Dynamic 3D shape reconstruction:
– Reconstruct the 3D shape for each frame
– Estimate 3D motion by establishing correspondences between frames t and t+1
Constraints:
– Photometric constraint
– Silhouette constraint
– Smoothness constraint
– 3D motion flow constraint
– Inertia constraint
The first three constraints drive intra-frame deformation; all five drive inter-frame deformation.

13 Intra-frame deformation
Step 1: Convert the voxel representation into a triangle mesh [1].
Step 2: Deform the mesh iteratively:
– Step 2.1: Compute the force acting on each vertex.
– Step 2.2: Move each vertex according to the force.
– Step 2.3: Terminate if all vertex motions are small enough; otherwise go back to Step 2.1.
[1] Y. Kenmochi, K. Kotani, and A. Imiya, "Marching cubes method with connectivity," in Proc. of 1999 International Conference on Image Processing, pages 361–365, Kobe, Japan, Oct. 1999.
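A minimal sketch of this loop, with the force computation left as a callback since its definition comes on the following slides (names and the update rule are assumptions):

```python
import numpy as np

def deform_mesh(vertices, faces, compute_force, step=0.1, eps=1e-4, max_iters=1000):
    """Iterative intra-frame mesh deformation (a sketch).

    vertices      : (N, 3) vertex positions (e.g. from marching cubes
                    with connectivity [1])
    compute_force : callback returning an (N, 3) array of per-vertex
                    forces (photometric, smoothness, silhouette terms)
    """
    v = vertices.copy()
    for _ in range(max_iters):
        # Step 2.1: compute the force acting on each vertex.
        f = compute_force(v, faces)
        # Step 2.2: move each vertex along its force.
        v += step * f
        # Step 2.3: terminate once all vertex motions are small enough.
        if np.linalg.norm(step * f, axis=1).max() < eps:
            break
    return v
```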

14 Intra-frame deformation
External force: deforms the mesh so as to satisfy the photometric constraint.

15 Intra-frame deformation
Internal force: enforces the smoothness constraint.
Silhouette-preserving force: enforces the silhouette constraint.
Overall vertex force: combines the external, internal, and silhouette-preserving forces.
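The force definitions on this slide were figures and are not preserved in the transcript. A plausible form of the overall vertex force, following the companion paper [2] and using assumed notation, is a weighted sum of the three terms:

```latex
F(v) = \alpha\, F_{\mathrm{int}}(v) + \beta\, F_{\mathrm{ext}}(v) + \gamma\, F_{\mathrm{sil}}(v)
```

where F_int, F_ext, and F_sil denote the internal (smoothness), external (photometric), and silhouette-preserving forces, and α, β, γ are balancing coefficients.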

16 Performance Evaluation

17 Dynamic Shape Recovery
Inter-frame deformation:
– By deforming the model at time t so that it satisfies the constraints at time t+1, we obtain the shape at t+1 and the motion from t to t+1 simultaneously.

18 Dynamic Shape Recovery
Drift force: enforces the 3D motion flow constraint.
Inertia force: enforces the inertia constraint.
Overall vertex force: adds the drift and inertia forces to the intra-frame forces.
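Again the formulas did not survive transcription; a plausible inter-frame combination, extending the intra-frame force with the two new terms (notation assumed, cf. [2]):

```latex
F(v) = \alpha\, F_{\mathrm{int}}(v) + \beta\, F_{\mathrm{ext}}(v) + \gamma\, F_{\mathrm{sil}}(v) + \delta\, F_{\mathrm{drift}}(v) + \varepsilon\, F_{\mathrm{iner}}(v)
```

where F_drift and F_iner are the drift and inertia forces, and δ, ε are additional balancing coefficients.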

19 Dynamic Shape Recovery

20 3D Video Generation
Synchronized Multi-View Image Acquisition → Silhouette Extraction → Silhouette Volume Intersection → Surface Shape Computation → Texture Mapping

21 Viewpoint Independent Patch-Based Method
Select the most "appropriate" camera for each patch:
1. For each patch p_i:
2. Compute the locally averaged normal vector V_lmn using the normals of p_i and its neighboring patches.
3. For each camera c_j, compute the viewline vector V_cj pointing toward the centroid of p_i.
4. Select the camera c* for which the angle between V_lmn and V_cj is maximum.
5. Extract the texture of p_i from the image captured by camera c*.
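A sketch of this selection in Python (names are assumptions). Note why step 4 takes the maximum: with the viewline pointing from the camera toward the patch and the normal pointing outward, the two vectors are most opposed, i.e. the angle is largest, for the camera that sees the patch most frontally:

```python
import numpy as np

def select_camera_per_patch(patch_centroids, patch_normals, neighbors, cam_centers):
    """Viewpoint-independent camera selection (a sketch)."""
    selected = []
    for i, (c, n) in enumerate(zip(patch_centroids, patch_normals)):
        # Locally averaged normal over the patch and its neighbours.
        v_lmn = n + sum(patch_normals[j] for j in neighbors[i])
        v_lmn /= np.linalg.norm(v_lmn)
        best, best_angle = None, -1.0
        for j, cam in enumerate(cam_centers):
            v_c = (c - cam) / np.linalg.norm(c - cam)  # viewline toward centroid
            angle = np.arccos(np.clip(v_lmn @ v_c, -1.0, 1.0))
            if angle > best_angle:  # maximum angle = most frontal view
                best, best_angle = j, angle
        selected.append(best)
    return selected
```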

22 Viewpoint Dependent Vertex-Based Texture Mapping Algorithm
Parameters:
– c: camera
– p: patch
– n: normal vector
– I: RGB value

23 Viewpoint Dependent Vertex-Based Texture Mapping Algorithm
A depth buffer B_cj for camera c_j records, for each pixel, the patch ID and the distance from c_j to that patch.

24 Viewpoint Dependent Vertex-Based Texture Mapping Algorithm
A vertex is visible from camera c_j when the face of p_i can be observed from c_j:
– Project the visible patches onto B_cj.
– Check the visibility of each vertex using the buffer.
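A sketch of the per-vertex test against the buffer, assuming B_cj has already been built by rasterizing the visible patches (the projection and buffer interfaces are assumptions):

```python
import numpy as np

def vertex_visibility(vertices, depth_at_pixel, project, cam_center, tol=1e-3):
    """Depth-buffer visibility test (a sketch).

    project(v)     -> integer pixel (u, w) of vertex v in camera c_j's image
    depth_at_pixel : the distance channel of the buffer B_cj
    """
    visible = np.zeros(len(vertices), dtype=bool)
    for k, v in enumerate(vertices):
        u, w = project(v)
        d = np.linalg.norm(v - cam_center)
        # The vertex is visible iff nothing recorded in the buffer
        # lies closer along this pixel's ray.
        visible[k] = d <= depth_at_pixel[w, u] + tol
    return visible
```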

25 Viewpoint Dependent Vertex-Based Texture Mapping Algorithm
1. Compute the RGB values of all vertices visible from each camera.
2. Specify the viewpoint eye.
3. For each patch p_i, do 4 to 9.
4. If p_i is visible, do 5 to 9.
5. Compute the weight.
6. For each vertex of patch p_i, do 7 to 8.
7. Compute the normalized weight.
8. Compute the RGB value.
9. Generate the texture of patch p_i by linearly interpolating the RGB values of its vertices.
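The weight formulas on this slide were lost in transcription. The sketch below uses an assumed viewpoint-dependent weighting, giving larger weights to cameras whose viewing direction is closer to the virtual viewpoint's, and blends per-vertex colours with normalized weights:

```python
import numpy as np

def blend_vertex_colors(eye_dir, cam_dirs, vertex_rgb, visible):
    """Viewpoint-dependent vertex colouring (a sketch; the weighting
    scheme is an assumption, not taken from the slides).

    eye_dir    : unit viewing direction of the virtual camera
    cam_dirs   : (C, 3) unit viewing directions of the real cameras
    vertex_rgb : (C, V, 3) RGB of each vertex as seen by each camera
    visible    : (C, V) boolean visibility mask
    """
    # Assumed weight: alignment between each camera and the viewpoint.
    w = np.clip(cam_dirs @ eye_dir, 0.0, None)        # (C,)
    w_full = w[:, None] * visible                      # zero-out invisible
    norm = w_full.sum(axis=0) + 1e-9                   # per-vertex normaliser
    # Each vertex colour is the normalized weighted sum over cameras;
    # patch textures then interpolate these vertex colours linearly.
    return (w_full[..., None] * vertex_rgb).sum(axis=0) / norm[:, None]
```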

26 Performance
Compared methods:
– Viewpoint Independent Patch-Based Method (VIPBM)
– Viewpoint Dependent Vertex-Based Texture Mapping Algorithm (VDVBM)
  – VDVBM-1: including the real images captured by camera c_j itself
  – VDVBM-2: excluding the real images
Meshes:
– Mesh: converted from voxel data
– D-Mesh: after deformation

27 Performance

28 Performance
(plot: results per frame number)

29 Editing and Visualization System
Methods to generate camera-works:
– Key Frame Method
– Automatic Camera-Work Generation Method
Virtual scene setup:
– Virtual camera
– Background
– Object

30 Key Frame Method
Specify the parameters (position and rotation of the virtual camera, object, etc.) for arbitrary key frames.

31 Automatic Camera-Work Generation Method
Object's parameters (standing human):
– Position, height, direction
The user has only to specify:
1) the framing of the picture
2) the appearance of the object from the virtual camera
From these, the virtual camera parameters can be computed:
– the distance d between the virtual camera and the object
– the position (x_c, y_c, z_c) of the virtual camera

32 Automatic Camera-Work Generation Method
(formulas: the distance d between the virtual camera and the object, and the position (x_c, y_c, z_c) of the virtual camera)
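The slide's formulas are not preserved, so the sketch below is a guess at the spirit of the method under stated assumptions, not the authors' equations: the distance d follows from requiring a standing human of known height to fill a given fraction of the image, and the camera is placed on a sphere of radius d around the object according to the requested appearance:

```python
import numpy as np

def camera_from_framing(obj_pos, obj_height, obj_dir, fov_y, fill_ratio,
                        azimuth, elevation):
    """Derive virtual-camera parameters from framing constraints
    (a sketch; every formula here is an assumption).

    fov_y      : vertical field of view of the virtual camera (radians)
    fill_ratio : fraction of the image height the human should occupy
    azimuth, elevation : desired viewing angles relative to the
                 object's facing direction obj_dir (radians)
    """
    # Visible height at distance d is 2*d*tan(fov_y/2), so choose d
    # such that the object spans fill_ratio of the image height.
    d = obj_height / (fill_ratio * 2.0 * np.tan(fov_y / 2.0))
    # Place the camera on a sphere of radius d around the object.
    offset = d * np.array([
        np.cos(elevation) * np.cos(obj_dir + azimuth),
        np.cos(elevation) * np.sin(obj_dir + azimuth),
        np.sin(elevation),
    ])
    return d, obj_pos + offset  # d and (x_c, y_c, z_c)
```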

33 Conclusion
A PC cluster system with distributed active cameras for real-time 3D shape reconstruction:
– plane-based volume intersection method
– Plane-to-Plane Perspective Projection algorithm
– parallel pipeline processing
A dynamic 3D mesh deformation method for obtaining an accurate 3D object shape
A texture mapping algorithm for high-fidelity visualization
A user-friendly 3D video editing system

34 References
1) T. Matsuyama and T. Takai, "Generation, visualization, and editing of 3D video," in Proc. of Symposium on 3D Data Processing Visualization and Transmission, pages 234–245, Padova, Italy, June 2002.
2) T. Matsuyama, X. Wu, T. Takai, and T. Wada, "Real-time dynamic 3D object shape reconstruction and high-fidelity texture mapping for 3D video," IEEE Trans. on Circuits and Systems for Video Technology, pages 357–369, 2004.
3) T. Wada, X. Wu, S. Tokai, and T. Matsuyama, "Homography based parallel volume intersection: Toward real-time reconstruction using active camera," in Proc. of International Workshop on Computer Architectures for Machine Perception, pages 331–339, Padova, Italy, Sept. 2000.

