Light Field Mapping: Hardware-Accelerated Visualization of Surface Light Fields.

What is a Surface Light Field?
A 4-D function f(r, s, Θ, Φ) that defines the radiance of every point on the surface of an object in every viewing direction.
(r, s) – describe the surface location.
(Θ, Φ) – describe the viewing direction.
In practice, the function is almost always discrete.

Proposed Approach
f(r, s, Θ, Φ) ≈ ∑_k g_k(r, s) h_k(Θ, Φ) (eq 1)
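To build intuition for eq 1, a minimal sketch (my own illustration, not from the paper; the sample counts are invented): when the radiance happens to be separable in surface position and view direction, a single g·h product term reproduces the sampled light field exactly, which is why the sampled matrix has rank 1.

```python
import numpy as np

# Hypothetical discrete samples: M surface locations, N view directions.
M, N = 64, 32
g = np.random.rand(M)   # g(r, s) flattened over the surface samples
h = np.random.rand(N)   # h(Theta, Phi) flattened over the view samples

# A separable light field is exactly one summation term of eq 1.
F = np.outer(g, h)      # F[p, q] = g_p * h_q

# The rank of the sampled matrix confirms that one term suffices.
print(np.linalg.matrix_rank(F))  # 1
```

Real surface light fields are not separable, which is why the sum in eq 1 needs several terms.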

Surface Light Field Approximation
The approximation algorithms assume the data is given as a 4-D grid f(r_p, s_p, Θ_q, Φ_q), where:
p = 1, …, M – discrete values of surface location.
q = 1, …, N – discrete values of viewing angles.

Surface Light Field Approximation
f(r_p, s_p, Θ_q, Φ_q) ≈ ∑_k g_k(r_p, s_p) h_k(Θ_q, Φ_q) (eq 2)
The approximation is practical only if the number of summation terms is small.
It is difficult to find a good approximation to the complete SLF using only a few summation terms.

Surface Light Field Approximation
Instead, the surface of the object is partitioned into smaller units.
By decomposing the SLF of each unit separately, a close approximation of the original data is obtained.
This allows for efficient storage and fast rendering.

Using Singular Value Decomposition
SVD is used to factor the SLF.
This method is more robust and yields an optimal low-rank approximation.
To apply it, the 4-D SLF must first be rearranged into a matrix.

Approximation Through SVD
F_P = U S V^T
U – square matrix whose columns are the vectors u_k.
V – square matrix whose columns are the vectors v_k.
S – diagonal matrix of singular values σ_k.
U S V^T = ∑_k σ_k u_k v_k^T (eq 4)
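A sketch of this factorization with NumPy (the matrix size and variable names are mine, not the paper's): rearrange the sampled light field into a matrix F_P, take its SVD, and keep the K largest singular values. Per eq 4, the kept terms are exactly the outer products σ_k u_k v_k^T.

```python
import numpy as np

# Hypothetical light field matrix: rows = surface samples, cols = view samples.
rng = np.random.default_rng(0)
F_P = rng.random((100, 50))

U, s, Vt = np.linalg.svd(F_P, full_matrices=False)

# Rank-K approximation: sum of the K largest terms sigma_k * u_k * v_k^T (eq 4).
K = 10
F_approx = sum(s[k] * np.outer(U[:, k], Vt[k, :]) for k in range(K))

# Equivalent matrix form; truncated SVD is the optimal rank-K approximation
# in the least-squares sense (Eckart-Young).
assert np.allclose(F_approx, U[:, :K] * s[:K] @ Vt[:K, :])
err = np.linalg.norm(F_P - F_approx) / np.linalg.norm(F_P)
print(f"relative error with K={K}: {err:.3f}")
```

Keeping only K terms is what makes the per-unit decomposition compact enough to store as textures.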

Triangle-Centered Approximation
Partition the light field function across the individual triangles:
f(r, s, Θ, Φ) = ∑_i Π_Δi(r, s) f(r, s, Θ, Φ) (eq 5)
Π_Δi – characteristic function of triangle Δ_i.
Θ – azimuth angle.
Φ – elevation angle.
Each term is a triangle light field.
When rendered, this approach produces visible discontinuities at triangle edges.

Vertex-Centered Approximation
To eliminate the discontinuities, the SLF is instead partitioned around every vertex:
f(r, s, Θ, Φ) = ∑_j Λ_vj(r, s) f(r, s, Θ, Φ) (eq 8)
Λ_vj – hat function centered at vertex v_j.
In this method, each triangle shares its light field maps with the neighboring triangles.
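The hat functions Λ_vj form a partition of unity over each triangle, which is why the blended per-vertex light fields join without seams. A small sketch of this property (my own illustration, with made-up 2D coordinates) using barycentric weights, which are the values of the three vertex hat functions inside a triangle:

```python
import numpy as np

def barycentric_weights(p, a, b, c):
    """Weights of point p w.r.t. triangle (a, b, c) -- the values of the
    three vertex hat functions Lambda_vj at p."""
    m = np.array([[b[0] - a[0], c[0] - a[0]],
                  [b[1] - a[1], c[1] - a[1]]])
    w1, w2 = np.linalg.solve(m, np.asarray(p, float) - np.asarray(a, float))
    return np.array([1.0 - w1 - w2, w1, w2])

# Any point inside the triangle gets non-negative weights that sum to 1,
# so the three per-vertex terms of eq 8 blend back to the original function.
w = barycentric_weights((0.3, 0.2), (0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
print(w, w.sum())
```

Because the weights sum to one everywhere, adjacent triangles agree on their shared edges, removing the discontinuities of the triangle-centered scheme.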

Representation of Light Field Maps
The surface maps are stored as 2D textures G_k(s, t).
The view maps are stored as 2D textures H_k(x, y).
View map texture coordinates are computed from the viewing direction d = (d_x, d_y, d_z):
x = (d_x + 1) / 2 (eq 13)
y = (d_y + 1) / 2
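A sketch of eq 13 in code (assuming, as the slide's notation suggests, that d_x and d_y are components of a unit viewing-direction vector, so each lies in [-1, 1] and the result lies in [0, 1]):

```python
import numpy as np

def view_map_coords(d):
    """Map a viewing direction d = (dx, dy, dz) to view map texture
    coordinates in [0, 1] x [0, 1] per eq 13."""
    d = np.asarray(d, dtype=float)
    d = d / np.linalg.norm(d)      # normalize to unit length
    x = (d[0] + 1.0) / 2.0
    y = (d[1] + 1.0) / 2.0
    return x, y

print(view_map_coords((0.0, 0.0, 1.0)))  # (0.5, 0.5): head-on view maps to the texture center
```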

Rendering Algorithm
The triangle-centered and vertex-centered approaches differ only in how each individual approximation term is evaluated.
In both approaches, the surface map coordinates do not need to be recomputed.
The view map coordinates are recomputed every time the view changes.

Rendering Algorithm
Now evaluate the kth approximation term:
Triangle-centered – multiply, pixel by pixel, the image projections of the 2 texture fragments.
Vertex-centered – multiply, pixel by pixel, the 3 pairs of light field maps (one pair per vertex) and add the results together.

Hardware-Accelerated Implementation
The pixel-by-pixel approach (modulation) multiplies a surface map fragment by a view map fragment.
Multitexturing hardware support allows for efficient modulation of two texture fragments in one rendering pass.
For a K-term approximation:
K rendering passes for the triangle-centered approach.
3K rendering passes for the vertex-centered approach.
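A CPU-side sketch of what the rendering passes compute (the texture sizes and values are invented; on the GPU this is one modulated texture pair per pass, accumulated with additive framebuffer blending):

```python
import numpy as np

rng = np.random.default_rng(1)
K, H, W = 3, 16, 16

# Hypothetical rasterized fragments of the K surface maps and K view maps.
surface_maps = rng.random((K, H, W))
view_maps = rng.random((K, H, W))

# Each pass modulates (multiplies) one surface map fragment by the matching
# view map fragment; additive blending accumulates the passes.
framebuffer = np.zeros((H, W))
for k in range(K):
    framebuffer += surface_maps[k] * view_maps[k]

# The accumulated result is the K-term sum of eq 1, evaluated per pixel.
assert np.allclose(framebuffer, (surface_maps * view_maps).sum(axis=0))
```

The vertex-centered version repeats this for the three vertex map pairs of each triangle, hence 3K passes.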

Data Acquisition
First, images of the object are captured under fixed lighting conditions ( images).
The object geometry is computed with a structured lighting system consisting of a projector and a camera (10-20 scans).
The scans are registered together in the same reference frame used for image registration.
The resulting points are fed into mesh editing software.
Finally, the mesh is projected onto the camera images.