
Construction of an Immersive Mixed Environment Using an Omnidirectional Stereo Image Sensor
Jun Shimamura, Naokazu Yokoya, Haruo Takemura and Kazumasa Yamazawa
Graduate School of Information Science, Nara Institute of Science and Technology, Japan

Motivation
How to digitize a dynamic outdoor scene for creating an immersive virtualized environment in which the following are possible:
(1) interactive viewing of the environment with 3-D sensation, and
(2) synthesis of computer graphics (CG) objects maintaining correct occlusion between real and virtual objects.
June 12, 2000, IEEE Workshop on Omnidirectional Vision 2000

Our approach
A full panoramic 3-D model is made from a cylindrical panoramic image using a video-rate omnidirectional stereo image sensor.
CG objects are merged into the full panoramic 3-D model of the real scene with correct occlusion.
A prototype immersive mixed reality system using a large cylindrical screen has been developed.

Our approach is based on acquiring full panoramic 3-D models of real worlds using a video-rate omnidirectional stereo image sensor [5]. The sensor can produce a pair of high-resolution omnidirectional binocular images that satisfy the single viewpoint constraint; that is, each omnidirectional image is obtained from a single effective viewpoint. A full panoramic 3-D model is made from a cylindrical panoramic image and the depth to each pixel from the center of the cylinder. The secondary objective is to construct a prototype system that presents a mixed environment consisting of a real scene model and CG models. CG objects are merged into the full panoramic 3-D model of the real scene while maintaining consistent occlusion between real and virtual objects, which makes it possible to yield rich 3-D sensation with binocular and motion parallax. We have developed a prototype immersive mixed reality system using a large cylindrical screen, in which a user can walk through the mixed environment and manipulate CG objects in real time.
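The panoramic 3-D model described above pairs each pixel of the cylindrical panorama with a depth from the cylinder center. The back-projection can be sketched as follows; this is a minimal illustration, not the authors' implementation, and the angular ranges are hypothetical defaults:

```python
import numpy as np

def panorama_to_points(depth, azimuth_span=2 * np.pi, v_fov=np.pi / 3):
    """Back-project a cylindrical panoramic depth map to 3-D points.

    depth: (H, W) array, distance from the cylinder center to each pixel.
    Assumes columns map linearly to azimuth over `azimuth_span` and rows
    map linearly to elevation over `v_fov` (hypothetical parameters).
    """
    h, w = depth.shape
    theta = np.linspace(0.0, azimuth_span, w, endpoint=False)  # azimuth per column
    phi = np.linspace(v_fov / 2, -v_fov / 2, h)                # elevation per row
    theta, phi = np.meshgrid(theta, phi)
    x = depth * np.cos(phi) * np.cos(theta)
    y = depth * np.cos(phi) * np.sin(theta)
    z = depth * np.sin(phi)
    return np.stack([x, y, z], axis=-1)                        # (H, W, 3)
```

Texture-mapping the panorama onto a mesh built over these points gives the cylindrical scene model the slides refer to.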

Omnidirectional Stereo Imaging Sensor
Twelve CCD cameras and two hexagonal pyramidal mirrors.

Generation of panoramic stereo images
1. Elimination of geometric distortion in the images
2. Color adjustment of the different camera images
3. Concatenation of six images to complete the upper and lower omnidirectional images of the stereo pair
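The color adjustment and concatenation steps can be sketched with a simple per-strip gain model; this is a hypothetical simplification (the slides do not specify the actual adjustment), assuming rectified camera strips with a known overlap in columns:

```python
import numpy as np

def match_gain(ref_strip, strip, overlap):
    """Scale `strip` so its mean brightness over the shared overlap
    columns matches `ref_strip` (simple gain-only color adjustment)."""
    ref_mean = ref_strip[:, -overlap:].mean()
    cur_mean = strip[:, :overlap].mean()
    return strip * (ref_mean / cur_mean)

def concatenate_strips(strips, overlap):
    """Gain-match each rectified camera strip to its left neighbor and
    concatenate into one panoramic image, dropping duplicated columns."""
    panorama = [strips[0]]
    for s in strips[1:]:
        s = match_gain(panorama[-1], s, overlap)
        panorama.append(s[:, overlap:])
    return np.concatenate(panorama, axis=1)
```

Running this over the six distortion-corrected strips of each mirror would yield one omnidirectional image of the stereo pair.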

Panoramic stereo images

Virtualizing a Dynamic Real Scene
Layered representation of a dynamic scene:
1. Static scene image generation
2. Moving object extraction
3. Depth estimation from the static panoramic stereo image and moving objects
4. Generation of the layered 3-D model

Static scene image generation
A pair of panoramic stereo images of a static scene without moving objects is obtained by majority filtering the dynamic panoramic image sequence into a static image.
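The majority filtering step can be sketched as a per-pixel temporal majority vote over the frame stack; a minimal, unoptimized illustration:

```python
import numpy as np

def majority_filter(frames):
    """Per-pixel temporal majority vote over a stack of frames.

    frames: (T, H, W) uint8 array of gray-level panoramic frames.
    Returns the most frequent value at each pixel, which suppresses
    transient moving objects and keeps the static background.
    """
    t, h, w = frames.shape
    flat = frames.reshape(t, -1)
    out = np.empty(flat.shape[1], dtype=frames.dtype)
    for i in range(flat.shape[1]):
        vals, counts = np.unique(flat[:, i], return_counts=True)
        out[i] = vals[np.argmax(counts)]
    return out.reshape(h, w)
```

Applying this to the upper and lower panoramic sequences yields the static stereo pair used for depth estimation.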

Extracted moving object regions in the upper panoramic image.
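One simple way to realize the moving object extraction above (an assumption; the slides do not give the exact rule) is to threshold each frame's difference against the static background image:

```python
import numpy as np

def extract_moving_regions(frame, static_image, threshold=25):
    """Mark pixels that differ from the static background image by more
    than `threshold` gray levels as moving-object regions.
    `threshold` is a hypothetical value, not taken from the slides."""
    diff = np.abs(frame.astype(np.int16) - static_image.astype(np.int16))
    return diff > threshold  # boolean mask of moving-object pixels
```

The resulting masks delimit the dynamic event layers that are later superimposed on the static scene layer.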

Depth estimation from panoramic stereo images

Generating Depth Image

Panoramic depth map generated from panoramic stereo images
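Because the sensor's two omnidirectional viewpoints are separated vertically, depth can be triangulated from the elevation-angle disparity between the upper and lower images. The following is a simplified sketch assuming the elevation angles of corresponding pixels are already recovered; it is not the authors' exact formulation:

```python
import numpy as np

def depth_from_vertical_stereo(phi_upper, phi_lower, baseline):
    """Triangulate horizontal distance from a vertical-baseline
    panoramic stereo pair.

    phi_upper, phi_lower: elevation angles (radians) of the same scene
    point seen from the upper and lower effective viewpoints.
    baseline: vertical distance between the two viewpoints.

    Geometry (upper viewpoint at +b/2, lower at -b/2, point at
    horizontal distance r and height z):
        tan(phi_lower) - tan(phi_upper) = b / r
    """
    disparity = np.tan(phi_lower) - np.tan(phi_upper)
    return baseline / disparity
```

Evaluating this per pixel over the matched static stereo pair produces the panoramic depth map shown on this slide.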

Generating Full Panoramic Scene Model

Bird’s-eye view of the texture-mapped 3-D scene model.

Cylindrical 3-D model

Immersive Mixed Reality System
Prototype system with a 330-degree cylindrical screen
Merging virtual objects into the virtualized real-world scene
Superimposing dynamic event layers onto the static scene layer

Cylindrical Display System
6144 x 768 pixel resolution

Cylindrical Display (Video)
Six projectors

Merging virtual objects
Background: 13,400 polygons
CG tree: 41,340 polygons
Total: 54,740 polygons
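Because depth values are appended to the panoramic image, occlusion between real and virtual content reduces to a per-pixel depth test. A minimal sketch of such z-buffer compositing, with hypothetical array interfaces (pixels not covered by CG would carry infinite CG depth):

```python
import numpy as np

def composite_with_occlusion(real_rgb, real_depth, cg_rgb, cg_depth):
    """Depth-test compositing: show the CG layer only where it is nearer
    than the real scene, so real geometry correctly occludes virtual
    objects. All inputs are per-pixel arrays; (H, W, 3) color and
    (H, W) depth."""
    cg_in_front = cg_depth < real_depth
    return np.where(cg_in_front[..., None], cg_rgb, real_rgb)
```

In the actual system this test is performed by the graphics pipeline's z-buffer when the polygonal CG objects are rendered against the cylindrical scene model.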

Superimposing dynamic event

Summary
A novel method of constructing a large-scale mixed environment.
A combination of a cylindrical 3-D model of a real scene and polygonal CG objects.
Depth values are appended to the cylindrical panoramic image of the real scene, so that the CG objects and the 3-D model exhibit correct occlusion.
A prototype system was built with a 330-degree cylindrical screen.

Future work
Extending the model to a much larger area by using multiple panoramic stereo images.
Implementing an algorithm that smoothly switches between constructed models as the user's viewpoint changes.

Thank you for your attention