A Camera-Projector System for Real-Time 3D Video
Marcelo Bernardes, Luiz Velho, Asla Sá, Paulo Carvalho
IMPA - VISGRAF Laboratory
Procams 2005

Overview
– How it works
– (b,s)-BCSL code
– Video + (b,s)-BCSL code
– Reconstruction pipeline
– Snapshots
– Why it works
– Discussion

How it works - Step 1
Projecting and capturing color patterns
– Two slides (S1, S2) containing specially coded vertical color stripes are projected onto the object. Each slide is followed by the projection of its color complement (S1', S2').
– A camera captures the four projected patterns on the scene.
[Figure: the four color slides S1, S1', S2, S2' projected onto the object at times t0, t1, t2, t3.]
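The slide construction can be sketched in a few lines of Python. This is only an illustration (the helper names and the toy sequence "RGB" are assumptions; the real stripe order comes from the (b,s)-BCSL code). Note that the per-channel complement 255 - color maps each of the six stripe colors to another stripe color (R↔C, G↔M, B↔Y):

```python
import numpy as np

def make_slide(color_sequence, stripe_width=8, height=4):
    """Render a vertical-stripe slide from a color sequence (sketch)."""
    palette = {"R": (255, 0, 0), "G": (0, 255, 0), "B": (0, 0, 255),
               "Y": (255, 255, 0), "M": (255, 0, 255), "C": (0, 255, 255)}
    row = np.concatenate([[palette[c]] * stripe_width for c in color_sequence])
    return np.tile(row, (height, 1, 1)).astype(np.uint8)

def complement(slide):
    """The complementary slide S' used in the projection sequence."""
    return 255 - slide

s1 = make_slide("RGB")
s1c = complement(s1)
print(s1[0, 0], s1c[0, 0])  # an R stripe and its C complement
```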

How it works - Step 2
Camera/projector correspondence
– Zero crossings and projected color stripes are robustly identified in the camera images using the complementary slides.
– The projected color sequences are decoded at each zero crossing, giving the camera/projector correspondence.
[Figure: zero crossings detected in camera space matched to the corresponding stripe boundaries in projector space.]
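The boundary-detection idea can be sketched in one dimension (an illustrative sketch, not the paper's implementation): subtracting the captured complement from the captured slide cancels scene albedo and ambient light to first order, so stripe boundaries show up as zero crossings of the signed difference.

```python
import numpy as np

def zero_crossings(slide, complement):
    """1-D sketch of stripe-boundary detection.

    The signed difference between a captured slide and its captured
    complement changes sign exactly where the projected stripe color
    flips; those sign changes are the zero crossings.
    """
    d = slide.astype(np.int64) - complement.astype(np.int64)
    # A crossing lies between adjacent samples of opposite sign.
    return np.where(d[:-1] * d[1:] < 0)[0]

# Toy scanline: a bright stripe then a dark one under slide S,
# inverted under its complement S'.
s  = np.array([200, 200, 200, 40, 40, 40])
sc = np.array([40, 40, 40, 200, 200, 200])
print(zero_crossings(s, sc))  # the boundary between samples 2 and 3
```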

How it works - Step 3
Photometry and geometry reconstruction
– Geometry is computed using the camera/projector correspondence images and the calibration matrices.
– The texture image is obtained by a simple combination of each complementary slide pair. For example, taking the per-channel maximum gives an image that approximates the scene under full white projector light.
[Figure: reconstructed texture and reconstructed geometry.]
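The texture combination above is essentially a per-channel maximum; a minimal sketch:

```python
import numpy as np

def reconstruct_texture(slide, complement):
    """Approximate the scene under full white projector light.

    At each pixel, the per-channel maximum of a captured slide and
    its captured complement keeps the response to the brighter of
    the two projected colors; together the pair spans full
    intensity in every channel.
    """
    return np.maximum(slide, complement)

# Toy pixel lit by R in one slide and by its complement C in the other.
slide = np.array([[[200, 10, 12]]], dtype=np.uint8)
complement = np.array([[[15, 180, 190]]], dtype=np.uint8)
texture = reconstruct_texture(slide, complement)
print(texture.tolist())  # [[[200, 180, 190]]]
```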

(b,s)-BCSL code
The (b,s)-BCSL method defines a coding/decoding procedure for unambiguously finding the index of a stripe transition using s slides and b colors. In our case we use b = 6 colors (R, G, B, Y, M, C) and s = 2 slides, which gives two codes of length 900, as illustrated below. For example, the color transitions (R, G) in slide 1 and (G, C) in slide 2 uniquely map to the transition number p in the O(1) decoding procedure.
[Figure: the two slide color sequences, with stripe transitions numbered 1 … p … 899; the pair of color transitions at position p uniquely encodes p.]
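The O(1) decoding can be sketched with a lookup table keyed on the pair of observed color transitions. This toy enumeration is not the actual (b,s)-BCSL construction; it only illustrates why decoding is a single table lookup, and that 6 colors over 2 slides yield exactly 30 × 30 = 900 distinct transition codes:

```python
from itertools import product

COLORS = "RGBYMC"  # b = 6 stripe colors

def build_lookup():
    """Toy enumeration of transition codes (not the real construction).

    A stripe transition is observed as a pair of color changes,
    (c -> c') in slide 1 and (d -> d') in slide 2. With 6 colors
    there are 6*5 = 30 valid changes per slide, hence 30*30 = 900
    distinct pairs -- matching the code length of the system.
    """
    pairs = [(c, c2) for c, c2 in product(COLORS, repeat=2) if c != c2]
    return {(p1, p2): n for n, (p1, p2) in enumerate(product(pairs, repeat=2))}

table = build_lookup()
# Decoding a detected transition is one dictionary lookup: O(1).
print(len(table))  # 900
```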

Video + (b,s)-BCSL code
The key to real-time 3D video is the combination of the (b,s)-BCSL code with a video stream. Our scheme has the following features:
– A video signal is generated in such a way that each frame contains a slide in the even field and its complement in the odd field. Frames (S1, S1') and (S2, S2') are interleaved in time.
– The video signal is sent to both the projector and the camera, which are synchronized through a genlock signal.
– The synchronized camera output is grabbed and pushed into the reconstruction pipeline.

Reconstruction pipeline
Our reconstruction pipeline is as simple as possible, achieving real-time 3D video with high-quality geometry and photometry at 30 Hz. This is possible because:
– Every captured input frame gives a new texture image (by combining both of its fields).
– New zero crossings and a new projected color map are computed for every input frame and correlated with those of the previous frame; the (b,s)-BCSL decoded transitions then give a new geometry set.
The frame arriving at time t_i thus gives texture p_i from its fields, and geometry g_i by correlation with the frame that arrived at time t_(i-1).
[Diagram: the reconstruction pipeline, producing 3D data (texture + geometry) for every frame.]
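The per-frame control flow can be sketched as below. All helper bodies are toy stand-ins with assumed names; only the structure, a texture from every frame and geometry from consecutive-frame correlation, mirrors the pipeline described above:

```python
def split_fields(frame):
    # Toy stand-in: a "frame" here is just a (slide, complement) pair.
    return frame

def combine_fields(even, odd):
    return max(even, odd)   # per-channel max in the real system

def decode_transitions(prev, frame):
    return (prev, frame)    # placeholder for (b,s)-BCSL decoding

def triangulate(correspondences):
    return correspondences  # placeholder for geometry computation

def reconstruction_pipeline(frames):
    """Texture every frame; geometry from frame i correlated with i-1."""
    prev, out = None, []
    for frame in frames:
        even, odd = split_fields(frame)
        texture = combine_fields(even, odd)
        geometry = (triangulate(decode_transitions(prev, frame))
                    if prev is not None else None)
        out.append((texture, geometry))
        prev = frame
    return out

results = reconstruction_pipeline([(200, 40), (30, 180), (190, 50)])
# Every frame yields a texture; geometry only from the second frame on.
print([t for t, _ in results])  # [200, 180, 190]
```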

Snapshots
The bunny-cube 3D video: geometry, texture, and composed virtual scene.

Snapshots
Moving hand (rendered with lines) and computed surface normals.

Snapshots Face and mouth movement

Snapshots Walking around

Why it works
– Projecting complementary slides is suitable for both photometry and geometry detection.
– Projected stripe colors are robustly detected through camera/projector color calibration.
– Stripe transitions are robustly detected by zero crossings.
– Slides are captured at 60 Hz, which is fast enough to capture "reasonably normal" motion between consecutive frames.
– Transition decoding is performed in O(1).
– While objects move, the stripes projected over their surfaces remain practically stationary!

Discussion
The current system embodiment uses NTSC video.
Pros:
– Standard off-the-shelf equipment, widely available with a good cost-benefit ratio.
Cons:
– Small resolution: 640x240 per field, which reduces the maximum number of stripes to around 75.
– The composite video signal has poor color fidelity, which reduces the precision of transition detection at stripe boundaries.
Solution: high-definition digital video.

Alternatives for 3D Video
Technologies:
– Range sensors (require an additional camera)
– Stereo methods:
  Passive stereo (not robust enough for real time)
  Active stereo (needs a projected pattern)
Our system: active stereo with complementary color patterns.
– Drawback: the projector's varying light can be uncomfortable for human subjects.
– Solution: switching complementary slides leads to a white-light perception as the projection/capture frequency reaches the flicker-fusion limit.