Week 8 - Wednesday

What did we talk about last time?
- Textures
- Volume textures
- Cube maps
- Texture caching and compression
- Procedural texturing
- Texture animation
- Material mapping
- Alpha mapping
- Bump mapping
- Normal maps
- Parallax mapping
- Relief mapping
- Heightfield texturing

- Radiometry is the measurement of electromagnetic radiation (for us, specifically light)
- Light is the flow of photons
- We'll generally think of photons as particles rather than waves
- Photon characteristics:
  - Frequency ν = c/λ (hertz)
  - Wavelength λ = c/ν (meters)
  - Energy Q = hν (joules), where h is Planck's constant
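As a quick worked example (using rounded physical constants, c ≈ 3.00 × 10^8 m/s and h ≈ 6.626 × 10^-34 J·s), a green photon with λ = 550 nm has

  ν = c/λ = (3.00 × 10^8 m/s) / (550 × 10^-9 m) ≈ 5.45 × 10^14 Hz
  Q = hν ≈ (6.626 × 10^-34 J·s)(5.45 × 10^14 Hz) ≈ 3.6 × 10^-19 J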

- We'll be interested in the following radiometric quantities:

  Quantity            Unit
  ------------------  ------------
  Radiant energy      joule (J)
  Radiant flux        watt (W)
  Irradiance          W/m^2
  Radiant intensity   W/sr
  Radiance            W/(m^2 sr)

- Radiant flux: energy per unit time (power)
- Irradiance: energy per unit time per unit area through a surface
- Intensity: energy per unit time per steradian

- The radiance L is what we care about, since that's what sensors detect
- We can think of radiance as the portion of irradiance within a solid angle
- Or, we can think of radiance as the portion of a light's intensity that flows through a surface
- Radiance doesn't change with distance
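In differential form (the standard definition, filling in what the slide states in words), radiance is flux per unit projected area per unit solid angle:

  L = d²Φ / (dA cos θ dω)

The cos θ term accounts for foreshortening. It also explains why radiance doesn't change with distance: as a sensor moves away, the solid angle each surface point subtends shrinks at exactly the rate that the visible surface area grows, and the two effects cancel.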

- Radiometry just deals with physics
- Photometry takes everything from radiometry and weights it by the sensitivity of the human eye
- Photometry is just trying to account for the eye's differing sensitivity to different wavelengths
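Concretely (this is the standard CIE definition, not given on the slide), each photometric quantity is its radiometric counterpart weighted by the luminous efficiency curve V(λ); for example, luminous flux is

  Φ_v = 683 ∫ V(λ) Φ_e(λ) dλ   (lumens)

The factor of 683 lm/W sets the scale at the curve's peak near 555 nm (green), where the eye is most sensitive.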

- Because they're just rescalings of radiometric units, every photometric unit is based on a radiometric one
- Luminance is often used to describe the brightness of surfaces, such as LCD screens

  Radiometric Quantity   Unit         Photometric Quantity   Unit
  ---------------------  -----------  ---------------------  --------
  Radiant energy         joule (J)    Luminous energy        talbot
  Radiant flux           watt (W)     Luminous flux          lumen
  Irradiance             W/m^2        Illuminance            lux
  Radiant intensity      W/sr         Luminous intensity     candela
  Radiance               W/(m^2 sr)   Luminance              nit

- Colorimetry is the science of quantifying human color perception
- The CIE defined a system of three non-monochromatic colors X, Y, and Z for describing the human-perceivable color space
- RGB is a transform from these values into monochromatic red, green, and blue colors
- RGB can only express the colors inside the triangle its three primaries form on the chromaticity diagram
- As you know, there are other color spaces (HSV, HSL, etc.)
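As a concrete example of the transform, here is a small HLSL helper using the standard XYZ-to-linear-sRGB matrix (these are the usual sRGB/D65 values; the helper name is mine, not from the slides):

// Convert a CIE XYZ color to linear sRGB using the standard D65 matrix
static const float3x3 XYZToRGB =
{
     3.2406, -1.5372, -0.4986,
    -0.9689,  1.8758,  0.0415,
     0.0557, -0.2040,  1.0570
};

float3 XYZToLinearRGB(float3 xyz)
{
    return mul(XYZToRGB, xyz); // values outside [0, 1] lie outside the sRGB triangle
}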

- Real light behaves consistently (but in a complex way)
- For rendering purposes, we often divide light into categories that are easy to model:
  - Directional lights (like the sun)
  - Omni lights (located at a point, but evenly illuminate in all directions)
  - Spotlights (located at a point and have intensity that varies with direction; see the sketch after this list)
  - Textured lights (give light projections variety in shape or color)
    - Similar to gobos, if you know anything about stage lighting
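As a sketch of how a spotlight's direction-dependent intensity might be computed in HLSL (the names SpotDirection, SpotCosInner, and SpotCosOuter are hypothetical parameters, not from this deck):

// Hypothetical spotlight parameters
float3 SpotDirection;   // axis the spotlight points along
float SpotCosInner;     // cosine of the inner (full-intensity) cone angle
float SpotCosOuter;     // cosine of the outer (zero-intensity) cone angle

// Returns 1 inside the inner cone, 0 outside the outer cone,
// and a smooth falloff in between
float SpotFalloff(float3 lightToSurface)
{
    float cosAngle = dot(normalize(lightToSurface), normalize(SpotDirection));
    return smoothstep(SpotCosOuter, SpotCosInner, cosAngle);
}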

- With a programmable pipeline, you can express lighting models of limitless complexity
- The old DirectX fixed-function pipeline provided a few stock lighting models:
  - Ambient lights
  - Omni lights
  - Spotlights
  - Directional lights
- All lights have diffuse, specular, and ambient color
- Let's see how to implement these lighting models with shaders

- Ambient lights are very simple to implement in shaders
- We've already seen the code
- The vertex shader must simply transform the vertex into clip space (world × view × projection)
- The pixel shader colors each fragment a constant color
- We could modulate this by a texture if we were using one (a sketch follows the code below)

float4x4 World;
float4x4 View;
float4x4 Projection;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity;

struct VertexShaderInput
{
    float4 Position : SV_Position;
};

struct VertexShaderOutput
{
    float4 Position : SV_Position;
};

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);
    return output;
}

float4 PixelShaderFunction(VertexShaderOutput input) : SV_Target
{
    return AmbientColor * AmbientIntensity;
}

technique Ambient
{
    pass Pass1
    {
        VertexShader = compile vs_2_0 VertexShaderFunction();
        PixelShader = compile ps_2_0 PixelShaderFunction();
    }
}
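If we did want to modulate the ambient term by a texture, as mentioned above, the pixel shader might look like this (the texture, sampler, struct, and function names here are illustrative additions, not part of the deck's minimal example):

texture ModelTexture;
sampler2D TextureSampler = sampler_state
{
    Texture = <ModelTexture>;
};

struct TexturedPixelInput
{
    float4 Position : SV_Position;
    float2 UV : TEXCOORD0; // assumes a UV passed through from the vertex data
};

float4 TexturedAmbientPS(TexturedPixelInput input) : SV_Target
{
    // Sample the texture and tint it by the ambient light
    float4 texColor = tex2D(TextureSampler, input.UV);
    return texColor * AmbientColor * AmbientIntensity;
}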

- Directional lights model light from a very long distance with parallel rays, like the sun
- A directional light has only color (specular and diffuse) and direction
- They are virtually free from a computational perspective
- Directional lights are also the standard model for BasicEffect
  - You don't have to use a shader to do them
- Let's look at a diffuse shader first
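For reference, the diffuse model being implemented is standard Lambertian shading:

  diffuse = DiffuseColor × DiffuseIntensity × max(n · l, 0)

where n is the unit surface normal and l is the unit vector toward the light. (The shader below relies on saturate to clamp a negative dot product to zero.)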

- We add values for the diffuse light intensity and direction
- We add a WorldInverseTranspose matrix to transform the normals
- We also add normals to our input and color to our output

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

float3 DiffuseLightDirection = float3(1, 1, 0);
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 0.7;

struct VertexShaderInput
{
    float4 Position : SV_POSITION;
    float3 Normal : NORMAL;
};

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float4 Color : COLOR;
};

- Color depends on the surface normal dotted with the light vector

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float3 normal = mul(input.Normal, (float3x3)WorldInverseTranspose);
    float lightIntensity = dot(normalize(normal), normalize(DiffuseLightDirection));
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
    return output;
}

- No real differences here
- The diffuse and ambient colors are added together
- The technique is exactly the same

float4 PixelShaderFunction(VertexShaderOutput input) : SV_Target
{
    return saturate(input.Color + AmbientColor * AmbientIntensity);
}

- Adding a specular component to the diffuse shader requires incorporating the view vector
- It will be included in the shader file and set as a parameter in the C# code

- The camera location is added to the declarations
- As are specular colors and a shininess parameter

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;
float3 Camera;
static const float PI = 3.14159265f;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1;

float3 DiffuseLightDirection = float3(1, 1, 0);
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 0.7;

float Shininess;
float4 SpecularColor = float4(1, 1, 1, 1);
float SpecularIntensity = 0.5;

- The output adds a normal so that the reflection vector can be computed in the pixel shader
- A world position lets us compute the view vector to the camera

struct VertexShaderInput
{
    float4 Position : SV_POSITION;
    float3 Normal : NORMAL;
};

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float4 Color : COLOR;
    float3 Normal : NORMAL;
    float4 WorldPosition : POSITIONT;
};

- The same computations as the diffuse shader, but we store the normal and the transformed world position in the output

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    output.WorldPosition = worldPosition;
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float3 normal = normalize(mul(input.Normal, (float3x3)WorldInverseTranspose));
    float lightIntensity = dot(normal, normalize(DiffuseLightDirection));
    output.Color = saturate(DiffuseColor * DiffuseIntensity * lightIntensity);
    output.Normal = normal;
    return output;
}

- Here we finally have a real computation, because we need to use the pixel normal (interpolated from the vertex normals) in combination with the view vector
- The technique is the same

float4 PixelShaderFunction(VertexShaderOutput input) : SV_Target
{
    float3 light = normalize(DiffuseLightDirection);
    float3 normal = normalize(input.Normal);
    float3 reflect = normalize(2 * dot(light, normal) * normal - light);
    // View vector points from the surface toward the camera
    float3 view = normalize(Camera - (float3)input.WorldPosition);
    float dotProduct = dot(reflect, view);
    float4 specular = (8 + Shininess) / (8 * PI) * SpecularIntensity * SpecularColor *
        max(pow(dotProduct, Shininess), 0) * length(input.Color);
    return saturate(input.Color + AmbientColor * AmbientIntensity + specular);
}
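A note on the (8 + Shininess) / (8 * PI) factor: it is the commonly used energy-normalization term for a Blinn-Phong specular lobe, which keeps the highlight's overall energy roughly constant as Shininess makes it tighter and brighter at the center. Strictly speaking, that factor is derived for the half-vector (Blinn-Phong) form, while this shader uses the reflection vector (Phong), so treat it as an approximation.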

- Point lights model omni lights at a specific position
- They generally attenuate (get dimmer) over a distance and have a maximum range
- Fixed-function DirectX had constant, linear, and quadratic attenuation terms (see the formula after this list)
- With shaders, you can choose your own attenuation function
- They are more computationally expensive than directional lights because a light vector has to be computed for every pixel
- It is possible to implement point lights in a deferred shader, lighting only those pixels that actually get used
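For reference, the classic fixed-function attenuation is

  atten = 1 / (k_c + k_l·d + k_q·d²)

where d is the distance to the light and k_c, k_l, and k_q are the constant, linear, and quadratic coefficients. The shader on the following slides makes a simpler choice: a squared falloff that reaches zero at LightRadius.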

- We add the light position (and a radius for attenuation)

float4x4 World;
float4x4 View;
float4x4 Projection;
float4x4 WorldInverseTranspose;
float3 LightPosition;
float3 Camera;
static const float PI = 3.14159265f;

float4 AmbientColor = float4(1, 1, 1, 1);
float AmbientIntensity = 0.1f;

float LightRadius = 50;
float4 DiffuseColor = float4(1, 1, 1, 1);
float DiffuseIntensity = 0.7;

float Shininess;
float4 SpecularColor = float4(1, 1, 1, 1);
float SpecularIntensity = 0.5f;

- We no longer need color in the output
- We do need the vector to the camera from the location
- We keep the world location at that fragment

struct VertexShaderInput
{
    float4 Position : SV_POSITION;
    float3 Normal : NORMAL;
};

struct VertexShaderOutput
{
    float4 Position : SV_POSITION;
    float4 WorldPosition : POSITIONT;
    float3 Normal : NORMAL;
};

- We compute the normal and the world position

VertexShaderOutput VertexShaderFunction(VertexShaderInput input)
{
    VertexShaderOutput output;
    float4 worldPosition = mul(input.Position, World);
    output.WorldPosition = worldPosition;
    float4 viewPosition = mul(worldPosition, View);
    output.Position = mul(viewPosition, Projection);

    float3 normal = normalize(mul(input.Normal, (float3x3)WorldInverseTranspose));
    output.Normal = normal;
    return output;
}

- Lots of junk in here: attenuation, diffuse, and specular are all computed per pixel

float4 PixelShaderFunction(VertexShaderOutput input) : SV_Target
{
    float3 normal = normalize(input.Normal);
    float3 lightDirection = LightPosition - (float3)input.WorldPosition;
    // Attenuation: squared falloff that reaches zero at LightRadius
    float intensity = pow(1.0f - saturate(length(lightDirection) / LightRadius), 2);
    lightDirection = normalize(lightDirection); // normalize after taking the length
    float3 view = normalize(Camera - (float3)input.WorldPosition);

    float lightAmount = dot(normal, lightDirection);
    float4 diffuse = intensity * saturate(DiffuseColor * DiffuseIntensity * lightAmount);

    float3 reflect = normalize(2 * lightAmount * normal - lightDirection);
    float dotProduct = dot(reflect, view);
    float4 specular = intensity * (8 + Shininess) / (8 * PI) * SpecularIntensity *
        SpecularColor * max(pow(dotProduct, Shininess), 0) * length(diffuse);

    return saturate(diffuse + AmbientColor * AmbientIntensity + specular);
}

- BRDFs
- Implementing BRDFs
- Texture mapping in shaders

- Finish reading Chapter 7
- Summer REU opportunity:
  - Machine learning at the Florida Institute of Technology
  - Deadline: March 31, 2015