
Image-based Rendering

Image-based Rendering (© 2002 James K. Hahn)
Usually based on 2-D images
Pre-calculation
– Pre-rendering (speed)
– From real photographs (speed and realism)
Usually for a static scene and a moving viewpoint
Rendering time decoupled from scene complexity

2-D Techniques
Warp reference image(s) to generate the required image
Consider images as texture maps
Use hardware for handling textures
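The warp of a reference image can be sketched as an inverse mapping: each output pixel is sent backwards through a transform into the reference image and sampled there. This is a minimal illustration, assuming a 2×3 affine map and nearest-neighbour sampling; real systems use the texture hardware mentioned above.

```python
def warp_image(src, affine, out_h, out_w):
    """Warp a reference image (list of rows) into a required view.

    `affine` = (a, b, c, d, e, f) maps a *destination* pixel (x, y) to
    the source position (a*x + b*y + c, d*x + e*y + f).  Destination
    pixels whose source position falls outside the reference image are
    left as holes (None).
    """
    a, b, c, d, e, f = affine
    dst = [[None] * out_w for _ in range(out_h)]
    for y in range(out_h):
        for x in range(out_w):
            sx = round(a * x + b * y + c)   # nearest-neighbour sample
            sy = round(d * x + e * y + f)
            if 0 <= sy < len(src) and 0 <= sx < len(src[0]):
                dst[y][x] = src[sy][sx]
    return dst
```

Inverse mapping (destination to source) is the usual choice because it guarantees every output pixel is visited exactly once; forward mapping instead raises the folding and hole problems discussed on the z-buffer slide.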

Sprites
Billboard: a 2-D image that is handled as a 3-D object
– E.g. the image of a tree kept perpendicular to the direction of view
Impostors: a generalization of the billboard
– May be pre-calculated to correspond to each of its bounding-box sides
– The impostor corresponding to the side facing the viewpoint is used
– Rendered as a texture map
– Warped as the viewpoint moves
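The two operations above can be sketched directly: turning a billboard so it stays perpendicular to the view direction, and picking which pre-rendered bounding-box impostor to display. A minimal sketch, assuming a y-up world and cylindrical (yaw-only) billboarding; the function names are illustrative, not from the slides.

```python
import math

def billboard_yaw(sprite_pos, camera_pos):
    """Yaw (radians, about the world up axis) that rotates a sprite
    whose unrotated normal is +z so that it faces the camera."""
    dx = camera_pos[0] - sprite_pos[0]
    dz = camera_pos[2] - sprite_pos[2]
    return math.atan2(dx, dz)

def pick_impostor_side(view_dir):
    """Choose the pre-calculated impostor whose bounding-box face most
    directly faces the viewer: the face normal most opposed to the
    view direction."""
    sides = {'+x': (1, 0, 0), '-x': (-1, 0, 0), '+y': (0, 1, 0),
             '-y': (0, -1, 0), '+z': (0, 0, 1), '-z': (0, 0, -1)}
    return max(sides,
               key=lambda s: -sum(v * n for v, n in zip(view_dir, sides[s])))
```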

Error of planar impostors
[Figure: viewpoints V0 and V1, an impostor plane holding X', and scene points x1 and x2]
As the viewpoint moves from V0 to V1, X' no longer represents both x1 and x2
If the angle θ is less than that subtended by a pixel, the error is acceptable
Amount of warp constrained
No motion parallax
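The validity test above can be made concrete in a simplified 2-D setup (an assumption for illustration, not the slide's exact geometry): a point a depth d behind an impostor plane at distance D acquires an angular error when the viewpoint translates laterally by t, and the impostor stays acceptable while that error is below the angle one pixel subtends.

```python
import math

def parallax_error(t, D, d):
    """Angular discrepancy (radians) between the true point and its
    impostor image: the point sits a depth d behind the impostor plane
    at distance D, and the viewpoint has moved laterally by t."""
    return math.atan2(t, D) - math.atan2(t, D + d)

def impostor_still_valid(err_rad, fov_rad=math.radians(60), screen_px=1024):
    """Accept the impostor while its error is below one pixel's angle
    (field of view divided by horizontal resolution)."""
    return err_rad < fov_rad / screen_px
```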

Image layering
"2 ½ D" rendering
The 3D scene is segmented into different layers
– Objects assigned to layers roughly according to distance from the viewer
Rendering resources allocated to different "display memory" by spatial and/or temporal sampling rates
– Can be prioritized by distance or speed
Each layer results in sprites that are then warped according to the viewing direction
Sprites are then composited into the final output image
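The final compositing step is typically the Porter-Duff "over" operator applied per pixel, back to front. A minimal per-pixel sketch with non-premultiplied RGBA tuples (a common convention; the slides do not specify one):

```python
def over(front, back):
    """Porter-Duff 'over': composite one RGBA pixel onto another
    (components in [0, 1], alpha not premultiplied)."""
    fr, fg, fb, fa = front
    br, bg, bb, ba = back
    a = fa + ba * (1 - fa)
    if a == 0:
        return (0.0, 0.0, 0.0, 0.0)
    blend = lambda f, b: (f * fa + b * ba * (1 - fa)) / a
    return (blend(fr, br), blend(fg, bg), blend(fb, bb), a)

def composite_layers(layers):
    """Composite one pixel from sprite layers ordered back to front."""
    out = (0.0, 0.0, 0.0, 0.0)
    for layer in layers:
        out = over(layer, out)
    return out
```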

Using depth information
Layers or sprites with depth information (not per-pixel depth)
Images with z-buffer (per-pixel depth)
Layered depth images (LDI)
– A single view of the scene with multiple pixels along each line of sight
– Complexity a function of depth complexity (the average number of surfaces that project onto a pixel)

Images with z-buffer
For each I(x, y), warp to I(x', y') as the viewpoint moves to a new location
– x', y' are a function of x, y, z, and the transformation of the viewpoint
Image folding problem: more than one pixel in the reference image maps into a single pixel in the extrapolated view
Holes due to an occluded point in the reference image becoming visible in the extrapolated view
Holes due to "stretching"
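Both artifacts are easy to demonstrate on a single scanline. A simplified 1-D sketch (an illustrative assumption: the warp is a lateral viewpoint shift, so each sample moves in proportion to its disparity 1/z): folds are resolved by keeping the nearest sample, and unwritten pixels remain as holes.

```python
def forward_warp(ref, dx_per_disparity, width):
    """Forward-warp a scanline of (x, depth, color) reference samples.

    Each sample shifts by dx_per_disparity / depth (a lateral viewpoint
    move).  When several samples fold onto one output pixel, the one
    with the smallest depth wins; pixels nothing maps to stay None.
    """
    out_color = [None] * width
    out_depth = [float('inf')] * width
    for x, z, color in ref:
        nx = round(x + dx_per_disparity / z)        # warped position
        if 0 <= nx < width and z < out_depth[nx]:   # resolve folding
            out_depth[nx] = z
            out_color[nx] = color
    return out_color
```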

Layered depth images (LDI)
A 3-D structure for a particular viewpoint
For each pixel, store information for all surfaces that it intersects
– Color, surface normal, depth
Generated by ray-tracing or by warping n images (with depth information) from different viewpoints
During rendering, incremental warp of each layer in back-to-front order
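The data structure above can be sketched as a pixel grid where each pixel holds a depth-sorted list of samples. A minimal sketch (surface normals omitted for brevity; the class name is illustrative):

```python
import bisect

class LayeredDepthImage:
    """Single-viewpoint image where each pixel keeps every surface along
    its line of sight, sorted by depth, so nearby views can be warped
    without disocclusion holes."""

    def __init__(self, w, h):
        self.w, self.h = w, h
        self.pixels = [[[] for _ in range(w)] for _ in range(h)]

    def insert(self, x, y, depth, color):
        """Add one surface sample, keeping the pixel's list depth-sorted."""
        bisect.insort(self.pixels[y][x], (depth, color))

    def depth_complexity(self):
        """Average number of stored surfaces per pixel."""
        total = sum(len(p) for row in self.pixels for p in row)
        return total / (self.w * self.h)

    def layers_back_to_front(self, x, y):
        """One pixel's samples, deepest first, ready for incremental warping."""
        return list(reversed(self.pixels[y][x]))
```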

View interpolation
Frames required for a walkthrough are generated from
– a set of reference images
– a warp script that describes corresponding pixels (pixel motion)
View morphing
– Generates interpolated views from reference images
– Interpolated transformations that preserve object shape
– Needs to know the camera parameters
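The core of the warp script is interpolating corresponding pixel positions between the two reference views. A minimal sketch, with the caveat that plain linear interpolation is only shape-preserving under special conditions; full view morphing brackets it with camera-dependent pre- and post-warps, which is why it needs the camera parameters.

```python
def interpolate_view(correspondences, t):
    """Interpolate an in-between frame from pixel correspondences.

    `correspondences` is a list of ((x0, y0), (x1, y1), color) entries
    pairing a pixel in view 0 with its match in view 1; t=0 reproduces
    view 0 and t=1 reproduces view 1.
    """
    frame = []
    for (x0, y0), (x1, y1), color in correspondences:
        x = (1 - t) * x0 + t * x1
        y = (1 - t) * y0 + t * y1
        frame.append((x, y, color))
    return frame
```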

Lumigraph (light field rendering)
For each point in the scene, pre-calculate and store the radiance in every direction at that point
Assumes occluder-free space (along a ray, radiance is constant)
Parameterized by a 4-D function: two parallel planes with (s, t) and (u, v) parameterization
Can be generated from photographs by taking a picture at discrete points in (s, t)
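The two-plane parameterization reduces a query ray to the four numbers (s, t, u, v): its intersections with the two parallel planes. A minimal sketch, assuming both planes are perpendicular to the z axis (a common convention; the plane positions here are illustrative defaults):

```python
def ray_to_stuv(origin, direction, z_st=0.0, z_uv=1.0):
    """Intersect a ray with the (s, t) plane at z=z_st and the (u, v)
    plane at z=z_uv, yielding the 4-D light field coordinate of the ray."""
    ox, oy, oz = origin
    dx, dy, dz = direction
    if dz == 0:
        raise ValueError("ray is parallel to the parameterization planes")
    k1 = (z_st - oz) / dz          # ray parameter at the (s, t) plane
    k2 = (z_uv - oz) / dz          # ray parameter at the (u, v) plane
    return (ox + k1 * dx, oy + k1 * dy, ox + k2 * dx, oy + k2 * dy)
```

Rendering a new view then amounts to shooting a ray per output pixel, mapping it to (s, t, u, v), and interpolating the nearest stored radiance samples.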

Photo-modeling
Generation of a 3-D model from photographs
Allows rich textures from the real world to be used
Much user intervention: specifying correspondences with known geometry

Photographic panorama
E.g. Apple Computer's QuickTime VR
Individual images stitched into a cylindrical panorama
Given a viewpoint, can pan in any direction in real time
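Real-time panning reduces to a lookup: the viewing direction indexes directly into the cylindrical strip, with yaw wrapping around the full 360 degrees and pitch mapped through the cylinder's vertical projection. A minimal sketch; the default vertical field of view is an illustrative assumption.

```python
import math

def pan_lookup(yaw, pitch, pano_w, pano_h, vfov=math.radians(90)):
    """Map a viewing direction (yaw, pitch, radians) to a pixel in a
    cylindrical panorama of pano_w x pano_h: yaw wraps around the strip,
    pitch indexes height via tan(pitch) on the unit cylinder."""
    col = int((yaw % (2 * math.pi)) / (2 * math.pi) * pano_w) % pano_w
    half = math.tan(vfov / 2)                      # half-height of the strip
    v = (math.tan(pitch) + half) / (2 * half)      # 0 at bottom, 1 at top
    row = min(pano_h - 1, max(0, int((1 - v) * pano_h)))
    return col, row
```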
