
1 9.2. Other Notable AI Aspects / HLSL Intro: Common board game AI approaches and Strategic AI

2 Brief introduction to tactical and strategic AI. [Diagram: the AI model, comprising Execution Management, World Interface, Strategy, Decision Making, Movement, Animation and Physics.]

3 Tactical and strategic AI encompasses a wide range of algorithms that try to:
- derive a tactical assessment of some situation, possibly using incomplete or probabilistic information
- use tactical assessments to make decisions and coordinate the behaviour of multiple characters
Aside: Not every genre of game needs tactical and/or strategic forms of AI.

4 A waypoint is simply a position in the game world. Just as path-finding waypoints hold path-finding information (e.g. terrain cost), tactical waypoints hold tactical information, e.g.:
- Cover points
- Reconnaissance/sniper locations
- Shadowed locations
- Power-up spawn points
- Exposed locations
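A tactical waypoint of this kind might be represented as below; this is a minimal sketch in which the type and field names (TacticalTag, Quality, etc.) are illustrative assumptions, not from the slides.

using Microsoft.Xna.Framework;

// Sketch of a tactical waypoint; all names here are illustrative.
public enum TacticalTag
{
    Cover,
    SniperLocation,
    Shadowed,
    PowerUpSpawn,
    Exposed
}

public class TacticalWaypoint
{
    public Vector3 Position;    // location in the game world
    public TacticalTag Tag;     // the tactical property this waypoint marks
    public float Quality;       // e.g. how good the cover is, in [0, 1]
}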

5 Tactical locations can either be set by the designer or derived from game data or analytical algorithms. Tactical nodes can be combined with path-finding nodes to provide tactically aware path-finding.
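One way to realise tactically aware path-finding is to fold a tactical penalty into the path cost. A minimal sketch, assuming each node carries a Danger score (e.g. drawn from tactical waypoint data); the Node type and the weighting scheme are illustrative, not from the slides.

using Microsoft.Xna.Framework;

// Sketch: a path-finding node carrying a tactical danger score (illustrative).
public class Node
{
    public Vector3 Position;
    public float Danger;    // e.g. 0 = safe, 1 = highly exposed
}

static class TacticalPathfinding
{
    // Bias an A*-style edge cost by tactical danger.
    public static float PathCost(Node fromNode, Node toNode, float dangerWeight)
    {
        float distance = Vector3.Distance(fromNode.Position, toNode.Position);
        // A higher dangerWeight trades longer routes for safer ones;
        // a weight of zero reduces to ordinary shortest-path behaviour.
        return distance + dangerWeight * toNode.Danger;
    }
}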

6 Influence mapping is widely used in strategy games to map the influence/strength of each side. The game world is split into chunks (a tile-based representation is common). Each chunk is assigned an influence score based on the combined balance of influence ‘emitted’ by game objects that can affect that chunk. The influence map can be used to identify points of weakness and strength and, from this, drive strategic goal selection.

7 The influence exerted on a particular area can depend upon the proximity of game objects (e.g. mobile units or stationary bases), the type of surrounding terrain (e.g. a mountain range may ‘prevent’ influence from passing), side-specific factors (e.g. current financial or happiness state), etc. In most games, influence emitted by a game object decays over distance (e.g. using a linear drop-off alongside a defined maximum influence range). [Figure: example tile-based influence map, with each tile's score decaying with distance from the emitting units.]
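A minimal sketch of the linear drop-off scheme, stamping one unit's influence onto a tile grid; the grid layout and falloff formula are illustrative assumptions. Friendly and enemy objects can be stamped with positive and negative strengths to give a signed balance per tile.

using System;

static class InfluenceMap
{
    // Sketch: add a unit's influence to each tile within maxRange, with full
    // strength at the unit's tile and zero at maxRange (linear drop-off).
    public static void ApplyInfluence(float[,] map, int unitX, int unitY,
                                      float strength, int maxRange)
    {
        int minX = Math.Max(0, unitX - maxRange);
        int maxX = Math.Min(map.GetLength(0) - 1, unitX + maxRange);
        int minY = Math.Max(0, unitY - maxRange);
        int maxY = Math.Min(map.GetLength(1) - 1, unitY + maxRange);

        for (int y = minY; y <= maxY; y++)
        {
            for (int x = minX; x <= maxX; x++)
            {
                float distance = (float)Math.Sqrt((x - unitX) * (x - unitX) +
                                                  (y - unitY) * (y - unitY));
                if (distance <= maxRange)
                    map[x, y] += strength * (1.0f - distance / maxRange);
            }
        }
    }
}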

8 Overview of approaches enabling jumping in games. [Diagram: the AI model, as on slide 2.]

9 Unlike other forms of steering behaviour, jumps are inherently risky (i.e. they can fail, possibly with ‘fatal’ consequences). To jump, the character must be moving at the right speed and in the right direction, and must execute the jump at the right time. Also, steering behaviours typically re-evaluate decisions several times per second, correcting small mistakes; a jump action is a one-time, single-event, fail-sensitive decision.

10 The simplest approach is to place jump points into the game level. If characters can move at different speeds, then the jump point also needs a minimum jump velocity. The character can then seek towards the jump pad, matching the specified speed, and jump whenever it is on the jump pad. [Figure: a jump pad annotated with its minimum jump velocity.]
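A minimal sketch of this trigger logic; Character, JumpPoint and their members are hypothetical game types introduced for illustration, not from the slides.

// Sketch: seek the jump pad, then jump once on the pad at sufficient speed.
// Character and JumpPoint are assumed (hypothetical) game types.
void UpdateJump(Character character, JumpPoint jumpPoint)
{
    if (jumpPoint.PadRegion.Contains(character.Position))
    {
        // One-shot, fail-sensitive decision: only jump if fast enough.
        if (character.Velocity.Length() >= jumpPoint.MinimumJumpSpeed)
            character.Jump();
    }
    else
    {
        // Steer towards the pad, matching the required take-off speed.
        character.SeekTowards(jumpPoint.Position, jumpPoint.MinimumJumpSpeed);
    }
}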

11 Some forms of jump require a defined target speed or a defined direction/angle of approach to the jump pad. Additionally, some jumps may have a higher ‘price’ of failure (e.g. ‘death’ vs. a short delay to climb back up). [Figures: jump pads requiring a precise jump speed and a precise jump direction.]
Aside: Such information can be incorporated into the jump point, but it is difficult to test extensively.

12 A good approach is to pair a jump pad with a landing pad. Doing this permits the game object to determine the required speed and direction (by solving the trajectory equations). This approach is more flexible (different characters can differ in how they move) and is less prone to error.
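For example, under simple projectile motion, jumpPad + v*t + 0.5*g*t² = landingPad can be solved for the launch velocity v once a time of flight t is chosen. A minimal sketch, assuming constant gravity and a fixed time of flight (a real implementation would also check the resulting speed against the character's limits):

using Microsoft.Xna.Framework;

static class JumpPlanner
{
    // Sketch: solve jumpPad + v*t + 0.5*g*t^2 = landingPad for the
    // launch velocity v, given gravity g and a chosen time of flight t.
    public static Vector3 ComputeJumpVelocity(Vector3 jumpPad, Vector3 landingPad,
                                              Vector3 gravity, float timeOfFlight)
    {
        Vector3 delta = landingPad - jumpPad;
        return delta / timeOfFlight - 0.5f * gravity * timeOfFlight;
    }
}

The character then seeks the jump pad while matching this velocity, and jumps on arrival.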

13 Introduction to HLSL

14 Early versions of the DirectX and OpenGL APIs defined a number of fixed rendering stages. This forced all games to use the same approach with only a few parameters open to change.

15 As GPUs increased in capability, it became possible to inject small programs (called shaders), allowing the application to have greater control over rendering. A list of vertices (points) is sent to the vertex shader. In the rasterisation stage, primitives are constructed from the output vertices. The primitives are then rasterised (i.e. the on-screen pixels determined) and vertex attributes are interpolated between the pixels. The pixel shader then determines the on-screen colour of each pixel. [Diagram: Application → Vertices → Vertex Shader → Rasterisation / Interpolation → Pixel Shader → Z-buffer test → Frame buffer → To screen]

16 Shaders are small programs that run on the GPU. Different shader languages are available.
Vertex Shader: the vertex shader can set/change rendered vertices, e.g. for object deformation, skeletal animation, particle motion, etc.
Pixel Shader: the pixel shader sets the colour of each pixel, e.g. for per-pixel lighting and texturing. It can also be used to apply effects over an entire scene, e.g. bloom, depth-of-field blur, etc.
Aside: DirectX 10 also supports geometry shaders (not supported by XNA).

17 HLSL is a shading language developed by Microsoft for the Direct3D API. HLSL offers a number of functions (mostly centred around branching control, math functions and texture access).
Aside: See http://msdn2.microsoft.com/en-us/library/bb509638.aspx for a complete HLSL reference.
To do: Decide if you want to explore this.

18 HLSL supports a set of scalar data types: bool, int, half, float and double. Note: vector/matrix forms can also be defined, e.g. float3, int2x2, double4x4, etc. HLSL also provides a sampler type (used to read, i.e. sample, textures): sampler, sampler1D, sampler2D, and sampler3D. The sampler type is defined using a number of different states, e.g. MinFilter, MagFilter, and MipFilter controlling texture filtering, and AddressU, AddressV, and AddressW controlling addressing, as in:

texture textureName;
sampler2D textureSampler = sampler_state
{
    Texture = <textureName>;
    MinFilter = Linear;
    MagFilter = Linear;
    MipFilter = Linear;
    AddressU = Wrap;
    AddressV = Wrap;
    AddressW = Wrap;
};

19 Semantics are used to map input and output data to variables. All varying input data (from the application or between rendering stages) requires a semantic tag; e.g. all outputs from the vertex shader must be semantically tagged:

float4 vertexPosition : POSITION0;

Note: the [n] suffix of a semantic is an optional integer that provides support for multiple data of the same type, e.g. TEXCOORD0, TEXCOORD1, TEXCOORD2.
Aside: The only valid semantic inputs to the pixel shader are COLOR[n] and TEXCOORD[n]. Often, custom data (i.e. data that is not actually a texture coordinate) is passed using a TEXCOORD[n] semantic.

20 HLSL permits C-like functions to be specified. A shader must define at least one vertex function (which processes vertex information) and at least one pixel function (which determines pixel colours). These functions must define their inputs and outputs with semantics.
Intrinsic Functions: HLSL offers a set of ‘built-in’ functions, mostly centred around flow control, math operations and texture access. For example:

float2 CalculateParallaxOffset(float3 view, float2 texCoord)
{
    view = normalize(view);
    float height = parallaxScale * tex2D(HeightSampler, texCoord).r + parallaxOffset;
    float2 viewOffset = view.xy * height;
    return viewOffset;
}

21 The following is a simple example shader:

// Define the world-view-projection matrix
float4x4 wvpMatrix : WorldViewProjection;

// Define the input structure expected by the vertex shader
struct vertexShaderInput
{
    float4 position : POSITION0;
};

// Define the input structure expected by the pixel shader
// (and also output by the vertex shader)
struct pixelShaderInput
{
    float4 screenPosition : POSITION;
    float3 colour : COLOR0;
};

// Vertex shader function: transform from model space to screen space
pixelShaderInput SimpleVS(vertexShaderInput input)
{
    pixelShaderInput output;
    output.screenPosition = mul(input.position, wvpMatrix);
    output.colour = float3(1.0f, 1.0f, 1.0f);
    return output;
}

// Pixel shader function: output the pixel colour
float4 SimplePS(pixelShaderInput input) : COLOR0
{
    return float4(input.colour.rgb, 1.0f);
}

// Technique definition, specifying the VS and PS functions and compile targets
technique SimpleShader
{
    pass
    {
        VertexShader = compile vs_1_1 SimpleVS();
        PixelShader = compile ps_1_1 SimplePS();
    }
}

22 Effects in XNA are a type of game asset (alongside textures and models). The Effect class represents an effect, permitting effect parameter configuration, technique selection, and actual rendering. An effect can be loaded and configured as shown:

// Load the effect
Effect effect;
effect = content.Load<Effect>("effectName");

// Select the desired technique
effect.CurrentTechnique = effect.Techniques["techniqueName"];

// Define effect parameters
effect.Parameters["colour"].SetValue(Vector3.One);
effect.Parameters["tolerance"].SetValue(0.8f);

// Begin the effect and iterate over each pass
effect.Begin();
foreach (EffectPass pass in effect.CurrentTechnique.Passes)
{
    pass.Begin();
    // Send vertex information to the effect, e.g.
    // graphicsDevice.DrawUserIndexedPrimitives(PrimitiveType.TriangleList, ...);
    // (vertex information can also be sent using other approaches)
    pass.End();
}
// End the effect
effect.End();

Aside: For better performance, effect.Parameters["name"] can be stored as an EffectParameter object (upon effect construction), and the SetValue(...) method called on that parameter.
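A minimal sketch of the caching pattern mentioned in the aside; the parameter name "colour" is carried over from the example above.

// Cache the parameter once (e.g. in LoadContent) rather than
// looking it up by name every frame.
EffectParameter colourParameter = effect.Parameters["colour"];

// Later, per frame:
colourParameter.SetValue(Vector3.One);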

23 To do:
- Complete the Question Clinic.
- Consider if this material is of use within your game.
- Complete the section in the project document on the Alpha hand-in.
- Round up work for the alpha hand-in at the end of this week.
Today we explored:
- A brief intro to some types of strategic/tactical AI
- An overview of how jumps can be supported within games
- HLSL and effect usage in XNA

