CSCE 641 Computer Graphics: Image-based Rendering (cont.) Jinxiang Chai.

Outline Light field rendering Plenoptic sampling (light field sampling) 3D light field (concentric mosaics) Others

Q: How many images are needed for anti-aliased light field rendering? Review: Plenoptic Sampling A: formulate this as high-dimensional signal (4D Plenoptic function) reconstruction and sampling problem

Review: Between Two Planes (figure: two-plane ray parameterization with camera-plane coordinate t and image-plane coordinate v; points at depths z1 and z2 project with different shifts)

Review: Minimal Sampling Rate (figure: spectral support of the light field; the 1/∆v axis is set by image resolution, the 1/∆t axis by the camera sample interval)

Review: Minimal Sampling Rate 1/∆t_max ≥ Ω_v (f/z_min - f/z_max)

Review: Minimal Sampling Rate Minimal sampling rate depends on: - texture of the object (Ω_v) - focal length (f) - depth complexity (z_min, z_max) 1/∆t_max ≥ Ω_v (f/z_min - f/z_max)
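The bound above can be turned into a maximum camera spacing directly. A minimal sketch (the parameter values are illustrative assumptions, not from the slides):

```python
# Sketch: maximum camera-plane spacing from the plenoptic-sampling bound
#   1/dt_max >= Omega_v * (f/z_min - f/z_max)
# so dt_max = 1 / (Omega_v * (f/z_min - f/z_max)).

def max_camera_spacing(omega_v, f, z_min, z_max):
    """Largest camera spacing dt that still permits anti-aliased
    light field rendering, per the bound above."""
    if z_min >= z_max:
        raise ValueError("z_min must be smaller than z_max")
    depth_term = f / z_min - f / z_max   # disparity range of the scene
    return 1.0 / (omega_v * depth_term)

# Example with assumed values: finer texture (larger omega_v) or a
# wider depth range both force denser camera sampling.
dt = max_camera_spacing(omega_v=50.0, f=0.035, z_min=1.0, z_max=10.0)
```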

3D Plenoptic Function Image/panorama is 2D Light field/lumigraph is 4D What happens to 3D? - 3D light field subset - Concentric mosaic [Siggraph99]

3D light field One row of the s,t plane, i.e., hold t constant, thus (s,u,v): a “row of images”
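The 3D subset is literally an array slice. A toy sketch (the resolutions and the fixed row are arbitrary assumptions), assuming the light field is stored as a dense array L[s, t, u, v]:

```python
import numpy as np

# A discretized 4D light field L[s, t, u, v] (grayscale, toy sizes).
# Holding t fixed yields the 3D subset -- "a row of images" -- that
# the slide describes.
L = np.zeros((16, 16, 64, 64), dtype=np.float32)  # (s, t, u, v)
row = L[:, 7, :, :]                               # hold t = 7 -> (s, u, v)
```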

Concentric mosaics [Shum and He] Polar coordinate system: - hold r constant - thus (θ,u,v)

Concentric mosaics Why concentric mosaic? - easy to capture - relatively small in storage size - inside looking out

Concentric mosaics From above How to capture images?

Concentric mosaics From above How to render a new image?

Concentric mosaics From above How to render a new image? - for each ray, retrieve the closest captured rays

Concentric mosaics From above How to render a new image? - for each ray, retrieve the closest captured rays How about this ray?

Concentric mosaics From above object How to retrieve the closest rays?

Concentric mosaics From above object (s,t) interpolation plane How to retrieve the closest rays?

Concentric mosaics From above object (s,t) interpolation plane How to retrieve the closest rays? What’s the optimal interpolation radius?

Concentric mosaics From above object (s,t) interpolation plane How to retrieve the closest rays? What's the optimal interpolation radius? 2 r_min r_max / (r_min + r_max)
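The optimal radius is the harmonic mean of the nearest and farthest scene depths. A one-line sketch:

```python
def optimal_interp_radius(r_min, r_max):
    """Optimal radius of the (s,t) interpolation plane for concentric
    mosaics: the harmonic mean 2*r_min*r_max / (r_min + r_max) of the
    nearest and farthest scene depths."""
    return 2.0 * r_min * r_max / (r_min + r_max)
```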

Concentric mosaics From above object (s,t) interpolation plane How to retrieve the closest rays?

Concentric mosaics From above object (s,t) interpolation plane How to synthesize the color of rays?

Concentric mosaics From above object (s,t) interpolation plane How to synthesize the color of rays? - bilinear interpolation
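The bilinear step can be sketched as follows (illustrative, not the paper's code). It assumes the concentric mosaic is stored as an array indexed by rotation angle, column, and row; a desired ray with fractional coordinates blends the four nearest captured rays:

```python
import numpy as np

def sample_ray(mosaic, theta_idx, u_idx, v):
    """Bilinearly blend the four captured rays nearest to fractional
    sample coordinates (theta_idx, u_idx).
    mosaic: array [n_theta, n_u, n_v]; the rotation angle wraps."""
    n_theta, n_u, _ = mosaic.shape
    t0, u0 = int(np.floor(theta_idx)), int(np.floor(u_idx))
    ft, fu = theta_idx - t0, u_idx - u0      # fractional offsets
    t1 = (t0 + 1) % n_theta                  # theta wraps around the circle
    u1 = min(u0 + 1, n_u - 1)                # clamp at the image border
    return ((1 - ft) * (1 - fu) * mosaic[t0 % n_theta, u0, v]
            + (1 - ft) * fu     * mosaic[t0 % n_theta, u1, v]
            + ft * (1 - fu)     * mosaic[t1, u0, v]
            + ft * fu           * mosaic[t1, u1, v])
```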

Concentric mosaics From above

Concentric mosaics What are the limitations?

Concentric mosaics What are the limitations? - limited rendering region - large vertical distortion

2.5D representation Image is 2D Light field/lumigraph is 4D 3D - a subset of the light field - concentric mosaics 2.5D - layered depth image [Shade et al, SIGGRAPH98] - view-dependent surfaces

Layered depth image [Shade et al, SIGGRAPH98] Layered depth image: (r,g,b,depth) - image with depths - rays with colors and depths

Layered depth image [Shade et al, SIGGRAPH98] Rendering from layered depth image - Incremental in X and Y - Guaranteed to be in back-to-front order - Forward warping one pixel with depth How to deal with occlusion/visibility problem?
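A simplified sketch of the warping step (not McMillan's incremental X/Y traversal: here the back-to-front order comes from an explicit sort, and `warp` is a hypothetical stand-in for the actual reprojection). Because samples arrive far-to-near, nearer surfaces simply overwrite farther ones and no z-buffer is needed:

```python
# Simplified LDI forward warping: painter's algorithm over depth-sorted
# samples. Each LDI sample is (x, y, depth, color); warp(x, y, depth)
# reprojects it to integer pixel coordinates in the novel view.

def render_ldi(samples, warp, width, height):
    """Forward-warp LDI samples into a novel view.
    Returns the image as a dict {(x', y'): color}."""
    out = {}
    # Far-to-near order: nearer samples overwrite farther ones, which
    # resolves occlusion/visibility without a z-buffer.
    for x, y, depth, color in sorted(samples, key=lambda s: -s[2]):
        xp, yp = warp(x, y, depth)
        if 0 <= xp < width and 0 <= yp < height:
            out[(xp, yp)] = color
    return out
```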

How to form LDIs Synthetic world with known geometry and texture - from multiple depth images - modified ray tracer Real images - reconstruct geometry from multiple images (e.g., voxel coloring, stereo reconstruction) - form LDIs using multiple images and reconstructed geometry

View-dependent surface representation From multiple input images: - reconstruct the geometry - view-dependent texture

View-dependent texture mapping [Debevec et al 98]

View-dependent texture mapping (figure: subject's 3D proxy with vertex V, cameras C_0–C_3, desired view D, and angles θ_0–θ_3) - Virtual camera at point D - Textures from camera C_i mapped onto triangle faces - Blending weights at vertex V - Angle θ_i is used to compute the weight values: w_i = exp(−θ_i²/(2σ²))
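The slide's weight formula can be sketched directly; the value of sigma and the sample angles are illustrative assumptions:

```python
import math

# View-dependent blending weights: camera C_i gets
#   w_i = exp(-theta_i^2 / (2 sigma^2)),
# where theta_i is the angle between the desired view D and camera C_i
# as seen from the proxy vertex V. Weights are normalized to sum to 1
# before the textures are blended.

def blend_weights(thetas, sigma=0.3):
    w = [math.exp(-(t * t) / (2.0 * sigma * sigma)) for t in thetas]
    total = sum(w)
    return [x / total for x in w]

# A camera nearly aligned with the desired view dominates the blend.
weights = blend_weights([0.05, 0.6, 1.2])
```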

Videos: view-dependent texture mapping

The Image-Based Rendering Problem Synthesize novel views from reference images Static scenes, fixed lighting Flexible geometry and camera configurations

The ULR Algorithm [Siggraph01] Designed to work over a range of image and geometry configurations Designed to satisfy desirable properties (plot: geometric fidelity vs. # of images, with VDTM, LF, and ULR regimes)

Desired Camera “Light Field Rendering,” SIGGRAPH ‘96 (figure: desired ray through u0 on the u plane and s0 on the s plane) Desired color interpolated from “nearest cameras”

Desired Camera “Light Field Rendering,” SIGGRAPH ‘96 Desired Property #1: Epipole consistency

Desired Camera “The Scene” “The Lumigraph,” SIGGRAPH ‘96 Potential Artifact

“The Scene” “The Lumigraph,” SIGGRAPH ‘96 Desired Property #2: Use of geometric proxy Desired Camera

“The Lumigraph,” SIGGRAPH ‘96 “The Scene” Desired Camera

“The Lumigraph,” SIGGRAPH ‘96 “The Scene” Rebinning Note: all images are resampled. Desired Camera Desired Property #3: Unstructured input images

“The Lumigraph,” SIGGRAPH ‘96 “The Scene” Desired Property #4: Real-time implementation Desired Camera

View-Dependent Texture Mapping, SIGGRAPH ’96, EGRW ‘98 “The Scene” Occluded Out of view

Desired Camera “The Scene” Desired Property #5: Continuous reconstruction View-Dependent Texture Mapping, SIGGRAPH ’96, EGRW ‘98

Desired Camera “The Scene” θ1 θ2 θ3 Desired Property #6: Angles measured w.r.t. proxy View-Dependent Texture Mapping, SIGGRAPH ’96, EGRW ‘98

“The Scene” Desired Camera

“The Scene” Desired Property #7: Resolution sensitivity

Unstructured Lumigraph Rendering 1. Epipole consistency 2. Use of geometric proxy 3. Unstructured input 4. Real-time implementation 5. Continuous reconstruction 6. Angles measured w.r.t. proxy 7. Resolution sensitivity
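The properties above can be combined into a blending field. A hedged paraphrase of the ULR idea (k-nearest relative weighting in the spirit of Buehler et al., not their exact formulation): each camera's penalty mixes the angle and resolution terms, and the k best cameras get weights that fall continuously to zero at the (k+1)-th best penalty, which gives the continuous reconstruction property as cameras enter and leave the nearest set:

```python
# Sketch of a ULR-style blending field. `penalties` is a per-camera
# score (lower is better) assumed to combine angle and resolution
# mismatch; only the k best cameras contribute, with weights that
# vanish continuously at the (k+1)-th best penalty.

def ulr_weights(penalties, k=3):
    order = sorted(range(len(penalties)), key=lambda i: penalties[i])
    thresh = penalties[order[k]] if len(penalties) > k else float('inf')
    w = [0.0] * len(penalties)
    for i in order[:k]:
        # Relative weight: 1 at penalty 0, 0 at the threshold penalty.
        w[i] = 1.0 if thresh == float('inf') else 1.0 - penalties[i] / thresh
    total = sum(w)
    return [x / total for x in w] if total > 0 else w

weights = ulr_weights([0.1, 0.5, 0.2, 0.9, 0.4], k=3)
```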

Demo