An Introduction to Light Fields
Mel Slater

Outline
– Introduction
– Rendering
– Representing Light Fields
– Practical Issues
– Conclusions

Introduction
L(p, ω) is defined over the set of all rays (p, ω) with origins at points on surfaces – the domain of L is the 5D ray space. L is constant in ‘free space’, i.e. space free of occluders. (Figure: an occluded ray versus a ray in ‘free space’.)

Introduction: Free Space
Radiance is constant along each ray in free space emanating from the convex hull around the scene.

Introduction: Restrict L to 4D
In free space the domain of L can be restricted to 4D: L(s,t,u,v) can represent all rays between two parallel planes – the st plane and the uv plane – forming a ‘light slab’. Here radiance flows in the direction from the uv plane to the st plane.
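The two-plane parameterisation can be made concrete with a short sketch. This is a minimal illustration, not from the slides: it assumes the st plane lies at z=0 and the uv plane at z=1, and the helper name `ray_to_slab` is hypothetical.

```python
def ray_to_slab(p, d, z_st=0.0, z_uv=1.0):
    """Intersect the ray p + a*d with the st plane (z = z_st) and the
    uv plane (z = z_uv), returning light-slab coordinates (s, t, u, v)."""
    px, py, pz = p
    dx, dy, dz = d
    if abs(dz) < 1e-12:
        raise ValueError("ray is parallel to the slab planes")
    a = (z_st - pz) / dz            # ray parameter at the st plane
    b = (z_uv - pz) / dz            # ray parameter at the uv plane
    return (px + a * dx, py + a * dy,   # (s, t)
            px + b * dx, py + b * dy)   # (u, v)

# A ray from the origin, slanted in x:
print(ray_to_slab((0, 0, 0), (1, 0, 1)))  # -> (0.0, 0.0, 1.0, 0.0)
```

Any ray not parallel to the planes is thus captured by just four numbers, which is what lets the 5D domain drop to 4D in free space.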

Introduction: Light Field = 6 Light Slabs
Six light slabs can represent rays in all directions:
– Near-far, Far-near
– Left-right, Right-left
– Bottom-top, Top-bottom
Discretisation provides an approximation to ‘all rays’.

Introduction: Assigning Radiance to Each Ray
Place the centre of projection (cop) at each grid point on the st plane and use the uv plane as an image plane. Each ray (s,t,u,v) is thereby assigned a radiance.
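The acquisition step above can be sketched as a loop over the st grid. This is a toy illustration under assumed names: `build_light_slab` and the callback `render_image` are hypothetical, standing in for rendering (or photographing) the scene with the cop at each st grid point.

```python
def build_light_slab(M, render_image):
    """Build one light slab: place the cop at each of the M*M st grid
    points and store the uv image rendered from there, so that
    slab[si][ti][ui][vi] holds the radiance of one sampled ray.
    render_image(si, ti) stands in for rendering, or photographing,
    the scene with the cop at st grid point (si, ti)."""
    return [[render_image(si, ti) for ti in range(M)] for si in range(M)]

# Toy 'renderer' whose radiance just encodes the ray's four indices:
slab = build_light_slab(2, lambda si, ti: [[(si, ti, ui, vi)
                                            for vi in range(4)]
                                           for ui in range(4)])
print(slab[1][0][2][3])  # -> (1, 0, 2, 3)
```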

Introduction: Synthesising a New Image
Place a new image plane and cop in the scene. Each image ray is intersected with the st and uv planes and approximated by the nearest sampled ray, e.g. by L(s3, u1) on grids s0..s4, u0..u4.
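Synthesising a pixel then amounts to snapping the viewing ray's slab coordinates to the nearest stored sample. A minimal sketch, with only the s and u dimensions shown (t and v are handled identically); the name `nearest_ray_index` is hypothetical:

```python
def nearest_ray_index(s, u, s_min, s_max, M, u_min, u_max, N):
    """Map a viewing ray's (s, u) slab coordinates to the indices of
    the nearest stored sample on an M-point st grid and an N-point
    uv grid, clamping rays that fall just outside the slab."""
    si = round((s - s_min) / (s_max - s_min) * (M - 1))
    ui = round((u - u_min) / (u_max - u_min) * (N - 1))
    return (min(max(si, 0), M - 1),
            min(max(ui, 0), N - 1))

# With 5 samples on each axis over [0, 1]:
print(nearest_ray_index(0.75, 0.25, 0.0, 1.0, 5, 0.0, 1.0, 5))  # -> (3, 1)
```

Nearest-neighbour lookup causes visible blockiness; the interpolation schemes in the Rendering section below exist precisely to smooth this out.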

Introduction: Summary
The light field is a discrete representation of L restricted to the 4D free-space domain. It relies on the production of a large set of images under strict viewing conditions. It is most often used for virtual images of real scenes, based on digital photographs. It can also be used for synthetic scenes – though there is not much point unless they are globally illuminated. It supports real-time walkthrough of scenes.

Rendering
Each st grid-point is associated with a uv image: let a texture map consisting of that image be associated with the st grid-point.

Rendering: Texture Mapping
Each st grid-point square is projected through the cop onto the uv plane, yielding texture coordinates a, b, c, d. The st square is then rendered with its texture map and those texture coordinates.

Rendering: Quadrilinear Interpolation
Perform linear interpolation on s:
L(s,t0,u0,v0) = α0 L(s0,t0,u0,v0) + α1 L(s1,t0,u0,v0)
Repeat the expansion for t, u and v, giving a weighted sum over the 16 surrounding samples (s0,s1) × (t0,t1) × (u0,u1) × (v0,v1).
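The fully expanded interpolation can be written directly as a sum over the 16 neighbouring samples. A minimal sketch, assuming the table is a nested Python list and the coordinates lie strictly inside it (the name `quadrilinear` is hypothetical):

```python
from math import floor

def quadrilinear(L, s, t, u, v):
    """Quadrilinearly interpolate the 4D table L[si][ti][ui][vi] at
    fractional coordinates (s, t, u, v): the linear expansion on s is
    repeated over t, u and v, giving a weighted sum of the 16
    surrounding samples. Coordinates must lie strictly inside the
    table; boundary handling is omitted for brevity."""
    s0, t0, u0, v0 = floor(s), floor(t), floor(u), floor(v)
    fs, ft, fu, fv = s - s0, t - t0, u - u0, v - v0
    out = 0.0
    for i in (0, 1):
        for j in (0, 1):
            for k in (0, 1):
                for m in (0, 1):
                    w = ((fs if i else 1 - fs) * (ft if j else 1 - ft) *
                         (fu if k else 1 - fu) * (fv if m else 1 - fv))
                    out += w * L[s0 + i][t0 + j][u0 + k][v0 + m]
    return out

# For a table storing the multilinear function s+t+u+v the result is exact:
table = [[[[i + j + k + m for m in (0, 1)] for k in (0, 1)]
          for j in (0, 1)] for i in (0, 1)]
print(quadrilinear(table, 0.5, 0.5, 0.5, 0.5))  # -> 2.0
```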

Efficient Rendering: Linear-Bilinear Interpolation
Gortler et al. show that texture mapping cannot be used to reproduce quadrilinear interpolation, but can reproduce linear interpolation on st combined with bilinear interpolation on uv. For a grid-point P with hexagonal neighbourhood A, B, C, D, E, F, render each of the triangles ABP, BCP, CDP, DEP, EFP, FAP with alpha-blending and texture mapping, setting α=1 at P and α=0 at each outer vertex.

Rendering: Depth Correction
Use an approximate polygon mesh to provide depth information. Without it, the ray (s1, u4) would be used to approximate the viewing ray; knowing where the viewing ray meets the object surface, the ray (s1, u3) can be chosen instead, which is more accurate.
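Depth correction can be sketched in 2D: once the viewing ray's intersection with the approximate geometry is known, the corrected u is where the line from the chosen st sample through that surface point crosses the uv plane. The helper name `depth_corrected_u` and the plane placement (st at z=0, uv at z=1) are assumptions for illustration:

```python
def depth_corrected_u(s1, q, z_uv=1.0):
    """2D sketch of depth correction. s1 is the chosen st sample (on
    the line z = 0); q = (x, z) is the point where the viewing ray
    meets the approximate scene geometry. Returns the u coordinate,
    on the line z = z_uv, of the ray through s1 that passes through q;
    this is the stored ray to use instead of the uncorrected one."""
    qx, qz = q
    if abs(qz) < 1e-12:
        raise ValueError("surface point lies on the st plane")
    return s1 + (qx - s1) * z_uv / qz

# Surface point one unit across in x and two units deep:
print(depth_corrected_u(0.0, (1.0, 2.0)))  # -> 0.5
```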

Representation
The 2PP (two-plane parameterisation) has many computational advantages:
– Efficient representation of images
– Texture mapping hardware can be used for rendering
– Linear properties of the representation
However, it is not a ‘uniform’ representation: as the viewpoint moves, the density and distribution of nearby rays change, and thus image quality changes during a walkthrough.

Representation: Using Spheres
Camahort and Fussell (1999) carried out a study of three representations:
– 2PP – two-plane
– 2SP – two-sphere
– DPP – direction-and-point
The DPP was theoretically shown to introduce less sampling bias and greater uniformity.

Practical Issues
Ideal placement of the st and uv planes?
– st outside the scene, and uv ideally through the ‘centre’ of the scene.
– Ideally the uv grid-points should be as close as possible to the scene geometry.
The implication is that scenes with a large depth range cannot be adequately represented.

Practical Issues: Resolution
What resolution should be used for st and uv?
– st (M × M) can afford to be sampled at a lower rate, since the accuracy depends on uv (N × N), which is close to the scene surfaces.
– Typical values are M=32, N=256 for synthesis of 256 × 256 resolution images.

Practical Issues: Memory
Suppose N=256, M=32:
– Each light slab requires 2^8 × 2^8 × 2^5 × 2^5 = 2^26 rays.
– Each ray carries 3 bytes (RGB).
– There are 6 light slabs.
The full representation therefore requires 6 × 3 × 2^26 bytes > 1GB of memory. A compression scheme is needed (vector quantisation is used in practice).
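The arithmetic above can be checked directly:

```python
# Storage for the full light field at the resolutions quoted above.
M, N = 32, 256                           # st and uv grid resolutions
bytes_per_ray = 3                        # RGB, one byte per channel
rays_per_slab = (N * N) * (M * M)        # 2**8 * 2**8 * 2**5 * 2**5 = 2**26
total_bytes = 6 * bytes_per_ray * rays_per_slab   # six light slabs
print(rays_per_slab)        # -> 67108864
print(total_bytes / 2**30)  # -> 1.125  (i.e. just over 1 GB)
```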

Conclusions
The LF approach offers a different paradigm with many disadvantages, but interesting possibilities:
– It exploits current hardware for a brute-force ‘solution’ of the radiance equation.
– Images of real scenes or synthetic scenes (and combinations).
– Rendering time is independent of scene complexity.
– The promise of real-time walkthrough for global illumination.