Exploitation of 3D Video Technologies
Takashi Matsuyama, Graduate School of Informatics, Kyoto University
12th International Conference on Informatics Research for Development of Knowledge Society Infrastructure (ICKS'04)

Outline
– Introduction
– 3D Video Generation
– Deformable Mesh Model
– Texture Mapping Algorithm
– Editing and Visualization System
– Conclusion

Introduction
A PC cluster for real-time active 3D object shape reconstruction.

3D Video Generation
Synchronized Multi-View Image Acquisition → Silhouette Extraction → Silhouette Volume Intersection → Surface Shape Computation → Texture Mapping

Silhouette Volume Intersection

Plane-to-Plane Perspective Projection
– The 3D voxel space is partitioned into a group of parallel planes.

Plane-to-Plane Perspective Projection
1. Project the object silhouette observed by each camera onto a common base plane.
2. Project each base-plane silhouette onto the other parallel planes.
3. Compute the 2D intersection of all silhouettes projected on each plane.
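The three steps above can be sketched as follows. This is a minimal illustration, assuming each silhouette is a binary uint8 mask and that homographies[k][j] is a hypothetical 3x3 homography mapping camera j's image onto slice plane k; for clarity it warps every silhouette onto every plane, whereas the LPPPP algorithm on the next slide projects onto the base plane once and then duplicates the base-plane silhouettes to the other planes.

```python
# Minimal sketch of plane-based silhouette volume intersection.
import numpy as np
import cv2

def plane_based_intersection(silhouettes, homographies, plane_size):
    """Return one 2D object cross section per parallel slice plane."""
    w, h = plane_size
    cross_sections = []
    for plane_homs in homographies:              # one list of 3x3 homographies per slice plane
        intersection = np.ones((h, w), dtype=np.uint8)
        for sil, H in zip(silhouettes, plane_homs):
            # Steps 1-2: project the camera silhouette onto this slice plane.
            projected = cv2.warpPerspective(sil, H, (w, h), flags=cv2.INTER_NEAREST)
            # Step 3: 2D intersection of all projected silhouettes.
            intersection &= (projected > 0).astype(np.uint8)
        cross_sections.append(intersection)
    return cross_sections
```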

Linearized Plane-to-Plane Perspective Projection (LPPPP) algorithm

Parallel Pipeline Processing on a PC Cluster System
Pipeline stages (run on nodes with and without cameras):
– Silhouette Extraction
– Projection to the Base Plane
– Base-Plane Silhouette Duplication
– Object Cross-Section Computation
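A minimal thread-and-queue sketch of this pipelined organization, not the authors' PC-cluster implementation (which distributes the stages over networked nodes); the stage worker functions for silhouette extraction, base-plane projection, silhouette duplication and cross-section computation are assumed to be supplied by the caller.

```python
# Sketch of a stage pipeline: each stage runs concurrently and passes frames downstream.
import threading
import queue

def stage(worker, inbox, outbox):
    """Run one pipeline stage: pull an item, process it, push it downstream."""
    while True:
        item = inbox.get()
        if item is None:                 # poison pill: shut the stage down
            outbox.put(None)
            break
        outbox.put(worker(item))

def build_pipeline(workers):
    """Chain the stage workers with queues; returns (input_queue, output_queue)."""
    inbox = first = queue.Queue()
    for w in workers:
        outbox = queue.Queue()
        threading.Thread(target=stage, args=(w, inbox, outbox), daemon=True).start()
        inbox = outbox
    return first, inbox
```

Frames pushed into the returned input queue flow through the chained stages concurrently, so successive frames occupy different stages at the same time.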

Parallel Pipeline Processing on a PC Cluster System
Average computation time for each pipeline stage (chart).

3D Video Generation
Synchronized Multi-View Image Acquisition → Silhouette Extraction → Silhouette Volume Intersection → Surface Shape Computation → Texture Mapping

Deformable Mesh Model
Dynamic 3D shape reconstruction:
– Reconstruct the 3D shape for each frame.
– Estimate 3D motion by establishing correspondences between frames t and t+1.
Constraints:
– Photometric, silhouette, and smoothness constraints (intra-frame deformation).
– 3D motion flow and inertia constraints (inter-frame deformation).

Intra-frame deformation
Step 1: Convert the voxel representation into a triangle mesh [1].
Step 2: Deform the mesh iteratively:
– Step 2.1: Compute the force acting on each vertex.
– Step 2.2: Move each vertex according to the force.
– Step 2.3: Terminate if all vertex motions are small enough; otherwise go back to Step 2.1.
[1] Y. Kenmochi, K. Kotani, and A. Imiya. "Marching cubes method with connectivity." In Proc. of 1999 International Conference on Image Processing, pages 361–365, Kobe, Japan, Oct. 1999.

Intra-frame deformation
External force: drives each vertex to satisfy the photometric constraint.

Intra-frame deformation
Internal force: smoothness constraint.
Silhouette preserving force: silhouette constraint.
Overall vertex force: combination of the three forces above.
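A minimal sketch of the deformation loop from the previous slides, assuming the mesh is given as vertex positions plus per-vertex neighbour lists. The photometric (external) and silhouette preserving forces are passed in as callables because their exact forms are not reproduced in the transcript; the smoothness force is written as a simple umbrella operator, and the weights and step size are illustrative, not the authors' values.

```python
# Sketch of Step 2 of the intra-frame deformation (iterative vertex update).
import numpy as np

def smoothness_force(vertices, neighbors):
    """Internal force (smoothness constraint): pull each vertex toward the
    centroid of its mesh neighbours (umbrella operator)."""
    return np.stack([vertices[idx].mean(axis=0) - vertices[i]
                     for i, idx in enumerate(neighbors)])

def deform_mesh(vertices, neighbors, external_force, silhouette_force,
                alpha=0.3, beta=0.5, gamma=0.2, step=0.5, tol=1e-3, max_iter=200):
    """Iteratively move vertices under a weighted combination of forces."""
    v = np.asarray(vertices, dtype=float).copy()
    for _ in range(max_iter):
        # Step 2.1: combine the three forces (weights illustrative).
        f = (alpha * external_force(v)                  # photometric constraint
             + beta * smoothness_force(v, neighbors)    # smoothness constraint
             + gamma * silhouette_force(v))             # silhouette constraint
        v += step * f                                   # Step 2.2: move each vertex
        if np.linalg.norm(step * f, axis=1).max() < tol:
            break                                       # Step 2.3: all motions small enough
    return v
```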

Performance Evaluation

Dynamic Shape Recovery
Inter-frame deformation: by deforming the model at time t so that it satisfies the constraints at time t+1, we obtain the shape at t+1 and the motion from t to t+1 simultaneously.

Dynamic Shape Recovery
Two additional forces are defined for inter-frame deformation:
– Drift force: 3D motion flow constraint.
– Inertia force: inertia constraint.
Overall vertex force: the intra-frame forces combined with the drift and inertia forces.
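A hedged sketch of the two additional forces, assuming flow_targets are per-vertex positions predicted by the estimated 3D motion flow and previous_velocity is the per-vertex displacement observed between the two preceding frames; the slide's exact formulas are not reproduced in the transcript.

```python
# Illustrative per-vertex forces for inter-frame deformation.
import numpy as np

def drift_force(vertices, flow_targets):
    """Drift force: pull each vertex toward the position predicted by the 3D motion flow."""
    return np.asarray(flow_targets, dtype=float) - np.asarray(vertices, dtype=float)

def inertia_force(previous_velocity):
    """Inertia force: favour motion consistent with the vertex motion in the previous frame."""
    return np.asarray(previous_velocity, dtype=float)

# The inter-frame update combines (weighted versions of) these two forces with
# the photometric, smoothness and silhouette forces of the intra-frame step.
```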

Dynamic Shape Recovery

3D Video Generation
Synchronized Multi-View Image Acquisition → Silhouette Extraction → Silhouette Volume Intersection → Surface Shape Computation → Texture Mapping

Viewpoint Independent Patch-Based Method
Select the most "appropriate" camera for each patch:
1. For each patch p_i:
2. Compute the locally averaged normal vector V_lmn using the normals of p_i and its neighboring patches.
3. For each camera c_j, compute the viewline vector V_cj directed toward the centroid of p_i.
4. Select the camera c* for which the angle between V_lmn and V_cj becomes maximum.
5. Extract the texture of p_i from the image captured by camera c*.
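A minimal sketch of this selection rule, assuming the patch centroids, the locally averaged normals V_lmn and the camera centres are supplied as numpy arrays (camera_centers is an assumed input name, not taken from the slides).

```python
# Per-patch camera selection for the viewpoint independent method.
import numpy as np

def select_cameras(centroids, avg_normals, camera_centers):
    """For each patch, pick the camera whose viewline makes the largest angle
    with the locally averaged normal V_lmn (step 4)."""
    centroids = np.asarray(centroids, dtype=float)
    avg_normals = np.asarray(avg_normals, dtype=float)
    camera_centers = np.asarray(camera_centers, dtype=float)
    best = []
    for c, n in zip(centroids, avg_normals):
        n = n / np.linalg.norm(n)                        # V_lmn (step 2)
        viewlines = c - camera_centers                   # V_cj: camera centre -> centroid (step 3)
        viewlines /= np.linalg.norm(viewlines, axis=1, keepdims=True)
        angles = np.arccos(np.clip(viewlines @ n, -1.0, 1.0))
        best.append(int(np.argmax(angles)))              # step 4: maximum angle
    return best                                          # step 5 extracts textures from these cameras
```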

Viewpoint Dependent Vertex-Based Texture Mapping Algorithm
Parameters:
– c: camera
– p: patch
– n: normal vector
– I: RGB value

Viewpoint Dependent Vertex-Based Texture Mapping Algorithm
A depth buffer B_cj for each camera c_j records, per pixel, the patch ID and the distance from c_j to that patch.

Viewpoint Dependent Vertex-Based Texture Mapping Algorithm
Visible vertex from camera c_j:
– The face of p_i can be observed from camera c_j.
– Project visible patches onto B_cj.
– Check the visibility of each vertex using the buffer.
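A simplified sketch of the depth-buffer visibility test, assuming a hypothetical project(points) camera model that returns integer pixel coordinates and depths; to keep it short, each patch is rasterized only at its centroid, whereas a real implementation fills every pixel the patch covers.

```python
# Simplified per-camera depth buffer B_cj and vertex visibility check.
import numpy as np

def build_depth_buffer(patches, vertices, project, shape):
    """Fill B_cj: at each pixel, the ID of and distance to the nearest patch."""
    depth = np.full(shape, np.inf)
    patch_id = np.full(shape, -1, dtype=int)
    patches = np.asarray(patches, dtype=int)
    vertices = np.asarray(vertices, dtype=float)
    centroids = vertices[patches].mean(axis=1)        # (P, 3) patch centroids
    pixels, dists = project(centroids)                # assumed camera model
    for pid, ((u, v), d) in enumerate(zip(pixels, dists)):
        if 0 <= v < shape[0] and 0 <= u < shape[1] and d < depth[v, u]:
            depth[v, u], patch_id[v, u] = d, pid      # keep the nearest patch
    return depth, patch_id

def vertex_visible(vertex, project, depth, eps=1e-2):
    """A vertex is visible from c_j if nothing in the buffer is closer than it."""
    pixels, dists = project(np.asarray(vertex, dtype=float)[None])
    (u, v), d = pixels[0], dists[0]
    if not (0 <= v < depth.shape[0] and 0 <= u < depth.shape[1]):
        return False
    return d <= depth[v, u] + eps
```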

Viewpoint Dependent Vertex-Based Texture Mapping Algorithm
1. Compute the RGB values of all vertices visible from each camera.
2. Specify the viewpoint eye.
3. For each patch p_i, do steps 4 to 9.
4. If p_i is visible, do steps 5 to 9.
5. Compute the weight.
6. For each vertex of patch p_i, do steps 7 to 8.
7. Compute the normalized weight.
8. Compute the RGB value.
9. Generate the texture of patch p_i by linearly interpolating the RGB values of its vertices.
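A minimal sketch of steps 3 to 9, assuming vertex_colors[j] holds the per-vertex RGB values seen from camera j (step 1), visible[j] the corresponding visibility mask, and cam_dirs / eye_dir unit viewing directions. The slide leaves the weight of step 5 as a formula that is not reproduced in the transcript; the cosine weight below is an illustrative choice, not necessarily the authors'.

```python
# Viewpoint dependent blending of per-camera vertex colours.
import numpy as np

def blend_vertex_colors(vertex_colors, visible, cam_dirs, eye_dir):
    """Blend per-camera vertex RGB values into viewpoint dependent vertex colours."""
    cam_dirs = np.asarray(cam_dirs, dtype=float)
    eye_dir = np.asarray(eye_dir, dtype=float)
    n_verts = vertex_colors[0].shape[0]
    weights = np.maximum(cam_dirs @ eye_dir, 0.0)   # step 5: per-camera weight (illustrative)
    out = np.zeros((n_verts, 3))
    total = np.zeros(n_verts)
    for j, w_j in enumerate(weights):
        w = w_j * visible[j].astype(float)          # use only cameras that see the vertex
        out += w[:, None] * vertex_colors[j]
        total += w
    total[total == 0] = 1.0                         # guard: vertex seen by no camera
    return out / total[:, None]                     # steps 7-8: normalized weighted RGB

# Step 9: the renderer then interpolates these vertex colours across each patch.
```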

Performance
Compared methods:
– Viewpoint Independent Patch-Based Method (VIPBM)
– Viewpoint Dependent Vertex-Based Texture Mapping Algorithm (VDVBM)
  – VDVBM-1: including the real images captured by camera c_j itself
  – VDVBM-2: excluding real images
Mesh: converted from voxel data; D-Mesh: after deformation.

Performance

Performance (plot against frame number)

Editing and Visualization System
Methods to generate camera work:
– Key Frame Method
– Automatic Camera-Work Generation Method
Virtual scene setup:
– Virtual camera
– Background
– Object

Key Frame Method
Specify the parameters (positions and rotations of the virtual camera, the object, etc.) for arbitrary key frames.

Automatic Camera-Work Generation Method
Object parameters (standing human): position, height, direction.
The user only has to specify:
1) the framing of the picture
2) the appearance of the object from the virtual camera
From these we compute the virtual camera parameters:
– the distance d between the virtual camera and the object
– the position (x_c, y_c, z_c) of the virtual camera

Automatic Camera-Work Generation Method
Formulas for the distance d between the virtual camera and the object and for the camera position (x_c, y_c, z_c).
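The slide's equations are not reproduced in the transcript. As an illustration only, the following sketch derives d and (x_c, y_c, z_c) under a simple pinhole-camera assumption: choose d so that an object of the given height fills a desired fraction of the vertical field of view, then place the camera at distance d from the object along the chosen viewing direction. All names and parameters here are assumptions, not the slide's formulas.

```python
# Illustrative pinhole-camera derivation of camera distance and position.
import numpy as np

def camera_from_framing(obj_pos, obj_height, view_dir, vfov_deg, fill_ratio=0.8):
    """Return (d, camera_position) so the object spans fill_ratio of the frame height.

    obj_pos: object position (3,), obj_height: object height,
    view_dir: unit vector pointing from the camera toward the object,
    vfov_deg: vertical field of view of the virtual camera in degrees.
    """
    vfov = np.radians(vfov_deg)
    # Visible height at distance d is 2 * d * tan(vfov / 2); solve for d so that
    # obj_height == fill_ratio * visible height.
    d = obj_height / (2.0 * fill_ratio * np.tan(vfov / 2.0))
    cam_pos = np.asarray(obj_pos, dtype=float) - d * np.asarray(view_dir, dtype=float)
    return d, cam_pos                                  # d and (x_c, y_c, z_c)
```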

Conclusion
– A PC cluster system with distributed active cameras for real-time 3D shape reconstruction: plane-based volume intersection, the Plane-to-Plane Perspective Projection algorithm, and parallel pipeline processing.
– A dynamic 3D mesh deformation method for obtaining an accurate 3D object shape.
– A texture mapping algorithm for high-fidelity visualization.
– A user-friendly 3D video editing system.

References
1) T. Matsuyama and T. Takai. "Generation, visualization, and editing of 3D video." In Proc. of Symposium on 3D Data Processing Visualization and Transmission, pages 234–245, Padova, Italy, June 2002.
2) T. Matsuyama, X. Wu, T. Takai, and T. Wada. "Real-time dynamic 3D object shape reconstruction and high-fidelity texture mapping for 3D video." IEEE Trans. on Circuits and Systems for Video Technology, pages 357–369, 2004.
3) T. Wada, X. Wu, S. Tokai, and T. Matsuyama. "Homography based parallel volume intersection: Toward real-time reconstruction using active camera." In Proc. of International Workshop on Computer Architectures for Machine Perception, pages 331–339, Padova, Italy, Sept. 2000.