3D Rendering – A beginner's guide Phil Carlisle – University of Bolton.


1 3D Rendering – A beginner's guide Phil Carlisle – University of Bolton

2 3D Rendering stages
– Gross culling
– Polygon definition
– Projection into 2D
– Clipping
– Rasterisation

3 Gross culling
Definition – "To kill off as many polygons as early as possible"
Imagine the view through the monitor: essentially it forms a pyramid with the top cut off (the narrow top part being the monitor, the wide base part being the far clip plane)
Ideally we only want to render those polygons which touch the volume this pyramid forms
The pyramid is known as a "view frustum"

4 Gross culling
Our initial task is to reduce the number of polygons we end up rendering by doing something with the set of polygons held within this volume
The quicker and cruder this test, the better, as detailed checks add more wasted time
So we start by looking at the world and ask whether there is something we can do to eliminate parts of it

5 Gross culling
Various spatial techniques:
– Grid lattice
– Quadtrees
– Octrees
– BSP trees with PVS
– Bounding volume hierarchy (BVH)
– Sphere trees
– Large occluders
Object-based tests using hardware
Bounding sphere/box tests against the frustum
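The bounding-sphere-against-frustum test above can be sketched as follows. This is a minimal illustration, not code from the lecture: the frustum is assumed to be six inward-facing planes (a, b, c, d) where a point is inside a plane when a*x + b*y + c*z + d ≥ 0, and the single example plane is made up for demonstration.

```python
def sphere_outside_frustum(center, radius, planes):
    """Return True if the sphere lies entirely outside any frustum plane."""
    cx, cy, cz = center
    for a, b, c, d in planes:
        # Signed distance from the sphere centre to the plane.
        if a * cx + b * cy + c * cz + d < -radius:
            return True   # fully behind this plane: cull the object
    return False          # touches or is inside: keep it (may be a false positive)

# Illustrative example: a single "near" plane facing +z at z = 1.
planes = [(0.0, 0.0, 1.0, -1.0)]
print(sphere_outside_frustum((0.0, 0.0, 0.0), 0.5, planes))  # behind the plane: True
print(sphere_outside_frustum((0.0, 0.0, 2.0), 0.5, planes))  # in front: False
```

Note that an object passing this test may still not contribute any visible pixels, which is exactly the "false positives" the next slide mentions.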

6 Polygon definition
By now we've reduced the polygon count
We should, as far as possible, have eliminated all objects which DO NOT intersect the view frustum
We can expect some false positives
We now need to actually render those objects
But we need to be efficient

7 State management
An issue with hardware rendering: various state changes slow down rendering
Sort by shader!
Sort by texture
Render target changes can be expensive
Shadows use render targets
Deferred rendering seems to be popular

8 State management
So we have an "object", which we split into sub-meshes
Each sub-mesh is basically defined by the material it renders with (i.e. the shader it uses and the textures that shader uses)
We group ALL meshes from ALL objects which share the same shader
We then do the same within that grouping for the textures
We then set the state ONCE and render ALL sub-meshes from each bucket of sorted types
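The shader-then-texture bucketing described above can be sketched like this. The sub-mesh representation and the state names are illustrative assumptions, not the lecture's actual data structures:

```python
from collections import defaultdict

def build_batches(submeshes):
    """Group sub-meshes by (shader, texture) so each state is set once.

    submeshes: list of (shader, texture, mesh_id) tuples (illustrative).
    """
    buckets = defaultdict(list)
    for shader, texture, mesh_id in submeshes:
        buckets[(shader, texture)].append(mesh_id)
    # Sorting the keys keeps all textures for one shader back to back,
    # so the shader changes least often and the texture next least.
    return sorted(buckets.items())

submeshes = [
    ("skin", "tex_a", 1), ("metal", "tex_b", 2),
    ("skin", "tex_a", 3), ("skin", "tex_c", 4),
]
for (shader, texture), ids in build_batches(submeshes):
    # In a real renderer: set_shader(shader); set_texture(texture); draw ids.
    print(shader, texture, ids)
```

With the sample input, the two "skin"/"tex_a" sub-meshes from different objects end up in one bucket and draw with a single state setup.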

9 Scene graphs
Many renderers use scene graphs as a method of constructing scenes
OGRE is a scene-graph API
Scene graphs are useful for structuring a 3D scene because of their hierarchy
They worked well with the model of APIs like OpenGL, which used a matrix stack
NOT exactly the fastest
Tom Forsyth says no! (link)

10 Projection into 2D
Here we are going to process the 3D data into 2D data to then draw to the screen
BUT! We want to be as efficient as possible
Think of a typical 3D model: it is composed of triangles
What can we do here to save ourselves some work?
Triangle strips/fans
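The saving from triangle strips comes from shared vertices: a strip of N triangles needs only N + 2 indices instead of 3 * N. A minimal sketch (the winding-flip convention is the usual one, assumed here rather than stated in the slides):

```python
def strip_to_triangles(indices):
    """Expand triangle-strip indices into individual triangles.

    Every other triangle has its first two indices swapped so all
    faces keep a consistent winding order.
    """
    tris = []
    for i in range(len(indices) - 2):
        a, b, c = indices[i], indices[i + 1], indices[i + 2]
        tris.append((a, b, c) if i % 2 == 0 else (b, a, c))
    return tris

strip = [0, 1, 2, 3, 4]           # 5 indices describe...
print(strip_to_triangles(strip))  # ...3 triangles (9 indices as a plain list)
```

Fans work the same way except every triangle shares the first vertex.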

11 Projection into 2D
For each vertex of our mesh, we need to project it from 3D world space into 2D screen space
We apply the matrices: world * camera * projection
The projection matrix basically incorporates a "perspective divide" to create the perspective view; it also deals with aspect ratio and FOV
We end up with a 2D screen-space coordinate and a depth
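The perspective divide, FOV, and aspect handling can be sketched for a single camera-space vertex. This is a simplified illustration (a symmetric perspective projection, camera looking down +z) rather than the full matrix pipeline; the parameter values are made up:

```python
import math

def project(point, fov_y_deg, aspect, width, height):
    """Project a camera-space point (x, y, z), z > 0, to pixels plus depth."""
    x, y, z = point
    f = 1.0 / math.tan(math.radians(fov_y_deg) / 2.0)  # focal scale from FOV
    # The perspective divide: dividing by z makes distant points shrink
    # toward the centre of the view.
    ndc_x = (f / aspect) * x / z
    ndc_y = f * y / z
    # Map normalised device coordinates [-1, 1] to pixel coordinates.
    px = (ndc_x * 0.5 + 0.5) * width
    py = (1.0 - (ndc_y * 0.5 + 0.5)) * height  # y flipped for screen space
    return px, py, z

print(project((0.0, 0.0, 5.0), 90.0, 1.0, 640, 480))  # lands at screen centre
```

The depth value carried along is what the rasteriser later uses for depth testing and perspective-correct interpolation.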

12 Clipping
We now have our triangles as 2D triangles in screen space
Some of them may end up cutting the edge of the screen
We need to be able to clip them!
We can use a guard-band arrangement à la PS2
Or we can use an algorithm:
– Sutherland–Hodgman
– Cohen–Sutherland
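The core of Cohen–Sutherland is the 4-bit outcode test, which trivially accepts or rejects most edges before any intersection maths. A sketch of just that classification step (the clip rectangle and points are illustrative):

```python
LEFT, RIGHT, BOTTOM, TOP = 1, 2, 4, 8

def outcode(x, y, xmin, ymin, xmax, ymax):
    """Return a 4-bit code saying which screen edges the point is outside."""
    code = 0
    if x < xmin:
        code |= LEFT
    elif x > xmax:
        code |= RIGHT
    if y < ymin:
        code |= BOTTOM
    elif y > ymax:
        code |= TOP
    return code

def classify(p0, p1, rect):
    c0 = outcode(*p0, *rect)
    c1 = outcode(*p1, *rect)
    if c0 == 0 and c1 == 0:
        return "accept"   # both endpoints on screen: draw as-is
    if c0 & c1:
        return "reject"   # both outside the same edge: discard entirely
    return "clip"         # straddles an edge: run the full clipper

rect = (0, 0, 640, 480)  # xmin, ymin, xmax, ymax
print(classify((10, 10), (100, 100), rect))    # accept
print(classify((-50, 10), (-20, 400), rect))   # reject
print(classify((-50, 240), (700, 240), rect))  # clip
```

A guard band works on the same idea but enlarges the accept rectangle so that most edge-crossing triangles can still be rasterised unclipped.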

13 Rasterisation
By this stage we have essentially a clipped set of triangles that we know are within the screen view
We need to convert these "points" into filled triangles
What data was passed into the rasteriser?
X, Y, Z coords, UV coords, normals, colours
It's at this point we need to consider our shading model

14 Rasterisation
Active edge table
Interpolate values from the input points across the edges
Interpolate from the edges across each scanline
Interesting note: for this to work, we need the perspective to be correct across the scanline and edges; that used to be VERY expensive because of the divides, so we fudged it
Used a "divide every N pixels" approach
Fixed-point maths to avoid floating-point calculations
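The fixed-point stepping trick can be sketched for one span: precompute a 16.16 fixed-point increment, then each pixel is just an integer add and a shift, with no per-pixel floating point. The span values and bit split are illustrative, not from the lecture:

```python
FP = 16  # fractional bits: 16.16 fixed point

def interpolate_span(v0, v1, count):
    """Linearly step a value from v0 to v1 across `count` pixels
    using fixed-point accumulation instead of per-pixel floats."""
    start = int(v0 * (1 << FP))
    step = int((v1 - v0) * (1 << FP)) // max(count - 1, 1)
    acc = start
    out = []
    for _ in range(count):
        out.append(acc >> FP)  # integer part, e.g. a colour channel
        acc += step
    return out

# Ramp a colour channel from 0 to 255 across a 5-pixel span.
print(interpolate_span(0, 255, 5))  # [0, 63, 127, 191, 255]
```

Note this is plain linear (affine) interpolation; the "divide every N pixels" fudge mentioned above layered occasional true perspective divides on top of runs of cheap linear steps like these.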

15 Where do shaders come in?
Vertex shaders
– Basically do the projection from world to screen space
– Can alter the point's values for colour/UV etc.
Pixel shaders
– Once the points are projected and the AET computed, the hardware renders the scanline, and each pixel of that scanline has its pixel shader called

16 The future!
Hardware is becoming much more programmable
– Intel Larrabee
– I assume Nvidia and ATI will quickly respond
Which means that many of the older "software" techniques are back in fashion
Not using polygons at all!!
Volumes? (Outland?Out…something)
Sphere/ellipsoid

17 Resources
Nvidia developer site on DX10-level render optimisation
BSP and PVS (Michael Abrash's "Graphics Programming Black Book")
Michael Abrash's "Zen of Code Optimization"
Interesting background on Mike Abrash and Intel/Larrabee/Pixomatic
