1
CSCE 590E Spring 2007 AI By Jijun Tang
2
Announcements April 16th/18th: demos Show progress/difficulties/change of plans USC Times will have reporters in the class High school outreach Anyone can contact their high school admin to arrange direct talks to students Of course, in SC only
3
Homework p. 655, questions 1 and 2 5 points total Due April 16th
4
Motion Extraction Moving the Game Instance Linear Motion Extraction Composite Motion Extraction Variable Delta Extraction The Synthetic Root Bone Animation Without Rendering
5
Moving the Game Instance Game instance is where the game thinks the object (character) is Usually just pos, orientation and bounding box Used for everything except rendering Collision detection Movement It’s what the game is! Must move according to animations
6
Linear Motion Extraction Find position on last frame of animation Subtract position on first frame of animation Divide by duration Subtract this motion from animation frames During animation playback, add this delta velocity to instance position Animation is preserved and instance moves Do same for orientation
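A minimal C++ sketch of the steps above, assuming a keyframed root-position track; the types and names (Vec3, Keyframe, extractLinearMotion) are illustrative, not the course's actual code. The net root translation over the clip becomes a constant velocity applied to the game instance, and that linear part is subtracted back out of every keyframe; orientation can be handled the same way.

#include <vector>

struct Vec3 { float x, y, z; };
static Vec3 sub(Vec3 a, Vec3 b)    { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static Vec3 scale(Vec3 v, float s) { return { v.x * s, v.y * s, v.z * s }; }

struct Keyframe { float time; Vec3 rootPos; };

// Returns the constant velocity to apply to the game instance and removes
// that linear motion from the animation's root-position keys.
// Assumes at least two keys and a non-zero duration.
Vec3 extractLinearMotion(std::vector<Keyframe>& keys)
{
    const Vec3  total    = sub(keys.back().rootPos, keys.front().rootPos);
    const float duration = keys.back().time - keys.front().time;
    const Vec3  velocity = scale(total, 1.0f / duration);

    for (Keyframe& k : keys)   // subtract the extracted linear part from each key
        k.rootPos = sub(k.rootPos, scale(velocity, k.time - keys.front().time));

    return velocity;           // during playback, add velocity * dt to the instance position
}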
7
Composite Motion Extraction Approximates motion with circular arc Pre-processing algorithm finds: Axis of rotation (vector) Speed of rotation (radians/sec) Linear speed along arc (metres/sec) Speed along axis of rotation (metres/sec) e.g. walking up a spiral staircase
8
Variable Delta Extraction Uses root bone motion directly Sample root bone motion each frame Find delta from last frame Apply to instance pos+orn Root bone is ignored when rendering Instance pos+orn is the root bone
9
The Synthetic Root Bone All three methods use the root bone But what is the root bone? Where the character “thinks” they are Defined by animators and coders Does not match any physical bone Can be animated completely independently Therefore, “synthetic root bone” or SRB
10
Animation Without Rendering Not all objects in the world are visible But all must move according to anims Make sure motion extraction and replay is independent of rendering Must run on all objects at all times Needs to be cheap! Use LME & CME when possible VDA when needed for complex animations
11
Mesh Deformation Find Bones in World Space Find Delta from Rest Pose Deform Vertex Positions Deform Vertex Normals
12
Example
13
Find Delta from Rest Pose Mesh is created in a pose Often the “da Vinci man” pose for humans Called the “rest pose” Must un-transform by that pose first Then transform by new pose Multiply new pose transforms by inverse of rest pose transforms Inverse of rest pose calculated at mesh load time Gives “delta” transform for each bone
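A sketch of the delta-transform computation described above, assuming the GLM math library is available; the structure and names are illustrative only. The inverse rest-pose matrices are built once at load time, and each frame the per-bone delta is the current world-space pose multiplied by that inverse.

#include <vector>
#include <glm/glm.hpp>

struct Skeleton {
    std::vector<glm::mat4> inverseRestPose;   // computed once at mesh load time
};

void buildInverseRestPose(const std::vector<glm::mat4>& restPose, Skeleton& skel)
{
    skel.inverseRestPose.clear();
    for (const glm::mat4& bone : restPose)
        skel.inverseRestPose.push_back(glm::inverse(bone));
}

// worldPose[i] is bone i's current world-space transform for this frame.
std::vector<glm::mat4> computeBoneDeltas(const Skeleton& skel,
                                         const std::vector<glm::mat4>& worldPose)
{
    std::vector<glm::mat4> deltas(worldPose.size());
    for (size_t i = 0; i < worldPose.size(); ++i)   // un-transform rest pose, then apply new pose
        deltas[i] = worldPose[i] * skel.inverseRestPose[i];
    return deltas;
}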
14
Deform Vertex Positions Each vertex is affected by several bones (the number is generally limited to 4 or fewer) Vertices each have n bones n is usually 4 4 bone indices 4 bone weights 0-1 Weights must sum to 1 Deformation usually performed on GPU Delta transforms fed to GPU Usually stored in “constant” space
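A CPU-side sketch of the 4-bone weighted blend described above; in a real engine this usually runs on the GPU with the bone deltas stored in constant registers. GLM is assumed and the vertex layout is hypothetical.

#include <vector>
#include <glm/glm.hpp>

struct SkinnedVertex {
    glm::vec3 position;       // rest-pose position
    int       boneIndex[4];   // indices into the bone delta array
    float     boneWeight[4];  // weights in 0-1, summing to 1
};

glm::vec3 skinPosition(const SkinnedVertex& v, const std::vector<glm::mat4>& boneDeltas)
{
    const glm::vec4 rest(v.position, 1.0f);
    glm::vec4 result(0.0f);
    for (int i = 0; i < 4; ++i)   // weighted sum of the four bone transforms
        result += v.boneWeight[i] * (boneDeltas[v.boneIndex[i]] * rest);
    return glm::vec3(result);
}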
15
Deform Vertex Normals Normals are important for shading and are done similarly to positions When transformed, normals must be transformed by the inverse transpose of the transform matrix Translations are ignored For pure rotations, inverse(A)=transpose(A) So inverse(transpose(A)) = A For scale or shear, they are different Normals can use fewer bones per vertex Just one or two is common
16
Inverse Kinematics FK & IK Single Bone IK Multi-Bone IK Cyclic Coordinate Descent Two-Bone IK IK by Interpolation
17
Single Bone IK Orient a bone in given direction Eyeballs Cameras Find desired aim vector Find current aim vector Find rotation from one to the other Cross-product gives axis Dot-product gives angle Transform object by that rotation
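A sketch of the cross/dot-product construction above, assuming GLM and unit-length aim vectors; a production version would handle the exactly-opposite-vectors case properly instead of just returning identity.

#include <cmath>
#include <glm/glm.hpp>
#include <glm/gtc/quaternion.hpp>

glm::quat aimRotation(const glm::vec3& currentAim, const glm::vec3& desiredAim)
{
    const float cosAngle = glm::clamp(glm::dot(currentAim, desiredAim), -1.0f, 1.0f);
    const glm::vec3 axis = glm::cross(currentAim, desiredAim);   // rotation axis

    if (glm::length(axis) < 1e-6f)                 // already aligned (or exactly opposite)
        return glm::quat(1.0f, 0.0f, 0.0f, 0.0f);  // identity; real code would pick any perpendicular axis

    const float angle = std::acos(cosAngle);       // rotation angle from the dot product
    return glm::angleAxis(angle, glm::normalize(axis));  // rotate the eyeball/camera/bone by this
}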
18
Multi-Bone IK One bone must get to a target position Bone is called the “end effector” Can move some or all of its parents May be told which it should move first Move elbow before moving shoulders May be given joint constraints Cannot bend elbow backwards
20
Two-Bone IK Direct method, not iterative Always finds correct solution If one exists Allows simple constraints Knees, elbows Restricted to two rigid bones with a rotation joint between them Knees, elbows! Can be used in a cyclic coordinate descent
21
IK by Interpolation Animator supplies multiple poses Each pose has a reference direction e.g. direction of aim of gun Game has a direction to aim in Blend poses together to achieve it Source poses can be realistic As long as interpolation makes sense Result looks far better than algorithmic IK with simple joint limits
22
Network and Multiplayer
23
Multiplayer Modes: Event Timing Turn-Based Easy to implement Any connection type Real-Time Difficult to implement Latency sensitive
24
Protocol Stack: Open Systems Interconnection (OSI)
25
Real-Time Communications: Peer to Peer vs. Client/Server (N = number of players)
Connections: Broadcast = 0; Peer/Peer = N-1; Client/Server: Client = 1, Server = N
Send: Broadcast = 1; Peer/Peer = N-1; Client/Server: Client = 1, Server = N
Receive: Broadcast = N-1; Peer/Peer = N-1; Client/Server: Client = 1, Server = N
26
Security: Encryption Methods Keyed Public Key Private Key Ciphers Message Digest Certificates IPSec
27
Security: Copy Protection Disk Copy Protection Costly mastering; delaying copies protects the first several months' sales Invalid/Special Sector Read Code Sheets Ask for a code from a line in a large manual Watermarking
28
Privacy Critical data should be kept secret and strongly encrypted: Real name Password Address/phone/email Billing Age (especially for minors) Use public-key encryption to protect the user name and password
29
Artificial Intelligence: Agents, Architecture, and Techniques
30
Book Material The book CD has a lot of material for this chapter, a state machine language for example Please try it
31
Artificial Intelligence Intelligence embodied in a man-made device Human level AI still unobtainable The difficulty is comprehension
32
Game Artificial Intelligence: What is considered Game AI? Is it any NPC (non-player character) behavior? A single “if” statement? Scripted behavior? Pathfinding? Animation selection? Automatically generated environment? Best shot at a definition of game AI?
33
Possible Game AI Definition Inclusive view of game AI: “Game AI is anything that contributes to the perceived intelligence of an entity, regardless of what's under the hood.”
34
Goals of an AI Game Programmer Different than academic or defense industry 1. AI must be intelligent, yet purposely flawed 2. AI must have no unintended weaknesses 3. AI must perform within the constraints 4. AI must be configurable by game designers or players 5. AI must not keep the game from shipping
35
Specialization of Game AI Developer No one-size-fits-all solution to game AI Results in dramatic specialization Strategy Games Battlefield analysis Long term planning and strategy First-Person Shooter Games One-on-one tactical analysis Intelligent movement at footstep level Real-Time Strategy games are the most demanding, with as many as three full-time AI game programmers
36
Game Agents May act as an Opponent Ally Neutral character Continually loops through the Sense-Think-Act cycle Optional learning or remembering step
37
Sense-Think-Act Cycle: Sensing Agent can have access to perfect information of the game world May be expensive/difficult to tease out useful info Game World Information Complete terrain layout Location and state of every game object Location and state of player But isn't this cheating???
38
Sensing: Enforcing Limitations Human limitations? Limitations such as Not knowing about unexplored areas Not seeing through walls Not knowing location or state of player Can only know about things seen, heard, or told about Must create a sensing model
39
Sensing: Human Vision Model for Agents Get a list of all objects or agents; for each: 1. Is it within the viewing distance of the agent? How far can the agent see? What does the code look like? 2. Is it within the viewing angle of the agent? What is the agent's viewing angle? What does the code look like? 3. Is it unobscured by the environment? Most expensive test, so it is purposely last What does the code look like?
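A sketch of what the per-object code for these three tests might look like, in the same cheapest-first order; the types are hypothetical, and lineOfSightBlocked() is a placeholder for whatever raycast/occlusion query the engine actually provides.

#include <glm/glm.hpp>

struct Agent {
    glm::vec3 position;
    glm::vec3 facing;            // unit vector
    float     viewDistance;      // how far the agent can see
    float     viewCosHalfAngle;  // cos(half of the viewing angle)
};

// Placeholder for the engine's occlusion query (assumed to exist in some form).
static bool lineOfSightBlocked(const glm::vec3& /*from*/, const glm::vec3& /*to*/)
{
    return false;
}

bool agentCanSee(const Agent& agent, const glm::vec3& target)
{
    const glm::vec3 toTarget = target - agent.position;
    const float dist2 = glm::dot(toTarget, toTarget);

    // 1. Within viewing distance?
    if (dist2 > agent.viewDistance * agent.viewDistance)
        return false;

    // 2. Within viewing angle?
    if (glm::dot(glm::normalize(toTarget), agent.facing) < agent.viewCosHalfAngle)
        return false;

    // 3. Unobscured by the environment? (most expensive test, so it comes last)
    return !lineOfSightBlocked(agent.position, target);
}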
40
Sensing: Vision Model Isn't vision more than just detecting the existence of objects? What about recognizing interesting terrain features? What would be interesting to an agent?
41
Sensing: Human Hearing Model Humans can hear sounds Can recognize sounds Knows what emits each sound Can sense volume Indicates distance of sound Can sense pitch Sounds muffled through walls have more bass Can sense location Where sound is coming from
42
Sensing: Modeling Hearing How do you model hearing efficiently? Do you model how sounds reflect off every surface? How should an agent know about sounds?
43
Sensing: Modeling Hearing Efficiently Event-based approach When sound is emitted, it alerts interested agents Use distance and zones to determine how far sound can travel
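A sketch of the event-based approach: emitting a sound notifies only the agents inside its audible radius. Types and names are hypothetical, and a real system would also apply zone and occlusion rules.

#include <vector>
#include <glm/glm.hpp>

struct SoundEvent {
    glm::vec3 position;
    float     audibleRadius;   // how far this sound carries
    int       type;            // footstep, gunshot, ...
};

struct Listener {
    glm::vec3 position;
    void onHeard(const SoundEvent& e) { (void)e; /* queue for the agent's think step */ }
};

void emitSound(const SoundEvent& e, std::vector<Listener>& agents)
{
    const float r2 = e.audibleRadius * e.audibleRadius;
    for (Listener& a : agents)
    {
        const glm::vec3 d = a.position - e.position;
        if (glm::dot(d, d) <= r2)    // distance (and, in a fuller system, zone) check
            a.onHeard(e);
    }
}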
44
Sensing: Communication Agents might talk amongst themselves! Guards might alert other guards Agents witness player location and spread the word Model sensed knowledge through communication Event-driven when agents within vicinity of each other
45
Sensing: Reaction Times Agents shouldn't see, hear, communicate instantaneously Players notice! Build in artificial reaction times Vision: ¼ to ½ second Hearing: ¼ to ½ second Communication: > 2 seconds
46
Sense-Think-Act Cycle: Thinking Sensed information gathered Must process sensed information Two primary methods Process using pre-coded expert knowledge Use search to find an optimal solution
47
Thinking: Expert Knowledge Many different systems Finite-state machines Production systems Decision trees Logical inference Encoding expert knowledge is appealing because it's relatively easy Can ask just the right questions As simple as if-then statements Problems with expert knowledge Not very scalable
48
Finite-state machine (FSM)
49
Production systems Consists primarily of a set of rules about behavior Productions consist of two parts: a sensory precondition (or "IF" statement) and an action (or "THEN") A production system also contains a database about current state and knowledge, as well as a rule interpreter
50
Decision trees
51
Logical inference Process of deriving a conclusion based solely on what one already knows Prolog (programming in logic) example: mortal(X) :- man(X). man(socrates). ?- mortal(socrates). Yes
52
Thinking: Search Employs search algorithm to find an optimal or near-optimal solution Branch-and-bound Depth-first Breadth-first A* pathfinding is a common use of search; it is something of a mix of the two methods, since its heuristic encodes expert knowledge
53
Depth and breadth-first
54
Thinking: Machine Learning If imparting expert knowledge and search are both not reasonable/possible, then machine learning might work Examples: Reinforcement learning Neural networks Decision tree learning Not often used by game developers Why?
55
Thinking: Flip-Flopping Decisions Must prevent flip-flopping of decisions Reaction times might help keep it from happening every frame Must make a decision and stick with it Until situation changes enough Until enough time has passed
56
Sense-Think-Act Cycle: Acting Sensing and thinking steps invisible to player Acting is how player witnesses intelligence Numerous agent actions, for example: Change locations Pick up object Play animation Play sound effect Converse with player Fire weapon
57
Acting: Showing Intelligence Adeptness and subtlety of actions impact perceived level of intelligence Enormous burden on asset generation Agent can only express intelligence in terms of vocabulary of actions Current games have huge sets of animations/assets Must use scalable solutions to make selections
58
Extra Step in Cycle: Learning and Remembering Optional 4th step Not necessary in many games Agents don't live long enough Game design might not desire it
59
Learning Remembering outcomes and generalizing to future situations Simplest approach: gather statistics If 80% of time player attacks from left Then expect this likely event Adapts to player behavior
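A tiny sketch of the statistics-gathering idea, using the attack-from-the-left example above; purely illustrative.

struct AttackStats {
    int fromLeft  = 0;
    int fromRight = 0;

    void record(bool attackCameFromLeft)
    {
        if (attackCameFromLeft) ++fromLeft; else ++fromRight;
    }

    // Probability the next attack comes from the left (0.5 until data exists).
    float expectLeft() const
    {
        const int total = fromLeft + fromRight;
        return total == 0 ? 0.5f : static_cast<float>(fromLeft) / total;
    }
};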
60
Remembering Remember hard facts Observed states, objects, or players For example Where was the player last seen? What weapon did the player have? Where did I last see a health pack? Memories should fade Helps keep memory requirements lower Simulates poor, imprecise, selective human memory
61
Remembering within the World Not all memory needs to be stored in the agent – it can be stored in the world For example: Agents get slaughtered in a certain area Area might begin to “smell of death” Agent's path planning will avoid the area Simulates group memory
62
Making Agents Stupid Sometimes very easy to trounce player Make agents faster, stronger, more accurate Sometimes necessary to dumb down agents, for example: Make shooting less accurate Make longer reaction times Engage player only one at a time Change locations to make self more vulnerable
63
Agent Cheating Players don't like agent cheating When agent given unfair advantage in speed, strength, or knowledge Sometimes necessary For highest difficulty levels For CPU computation reasons For development time reasons Don't let the player catch you cheating! Consider letting the player know upfront
64
Finite-State Machine (FSM) Abstract model of computation Formally: Set of states A starting state An input vocabulary A transition function that maps inputs and the current state to a next state
65
FSM
66
In Game Development Deviate from formal definition 1. States define behaviors (containing code) Wander, Attack, Flee 2. Transition function divided among states Keeps relation clear 3. Blur between Moore (within state) and Mealy machines (transitions) 4. Leverage randomness 5. Extra state information, for example, health
67
Good and Bad Most common game AI software pattern Natural correspondence between states and behaviors Easy to diagram Easy to program Easy to debug Completely general to any problem Problems Explosion of states Often created with ad hoc structure
68
Finite-State Machine: UML Diagram
69
Approaches Three approaches Hardcoded (switch statement) Scripted Hybrid Approach
70
Hardcoded FSM
void RunLogic( int* state )
{
    switch( *state )  // dereference the state pointer
    {
    case 0: // Wander
        Wander();
        if( SeeEnemy() )    { *state = 1; }
        break;
    case 1: // Attack
        Attack();
        if( LowOnHealth() ) { *state = 2; }
        if( NoEnemy() )     { *state = 0; }
        break;
    case 2: // Flee
        Flee();
        if( NoEnemy() )     { *state = 0; }
        break;
    }
}
71
Problems with switch FSM 1. Code is ad hoc Language doesn't enforce structure 2. Transitions result from polling Inefficient – event-driven sometimes better 3. Can't determine 1st time state is entered 4. Can't be edited or specified by game designers or players
72
Scripted with alternative language
AgentFSM
{
    State( STATE_Wander )
        OnUpdate
            Execute( Wander )
            if( SeeEnemy ) SetState( STATE_Attack )
        OnEvent( AttackedByEnemy )
            SetState( STATE_Attack )
    State( STATE_Attack )
        OnEnter
            Execute( PrepareWeapon )
        OnUpdate
            Execute( Attack )
            if( LowOnHealth ) SetState( STATE_Flee )
            if( NoEnemy ) SetState( STATE_Wander )
        OnExit
            Execute( StoreWeapon )
    State( STATE_Flee )
        OnUpdate
            Execute( Flee )
            if( NoEnemy ) SetState( STATE_Wander )
}
73
Scripting Advantages 1. Structure enforced 2. Events can be handled as well as polling 3. OnEnter and OnExit concept exists 4. Can be authored by game designers Easier learning curve than straight C/C++
74
Scripting Disadvantages Not trivial to implement Several months of development Custom compiler With good compile-time error feedback Bytecode interpreter With good debugging hooks and support Scripting languages often disliked by users Can never approach polish and robustness of commercial compilers/debuggers
75
Hybrid Approach Use a class and C-style macros to approximate a scripting language Allows FSM to be written completely in C++ leveraging existing compiler/debugger Capture important features/extensions OnEnter, OnExit Timers Handle events Consistent regulated structure Ability to log history Modular, flexible, stack-based Multiple FSMs, Concurrent FSMs Can't be edited by designers or players
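A minimal sketch of the macro flavor such a hybrid can take; this is illustrative only and far simpler than the book's full state machine language. The macros expand into nested switch statements, and setState() fires the OnExit/OnEnter handlers around the transition.

#include <cstdio>

enum State { STATE_Wander, STATE_Attack, STATE_Flee };
enum Event { EVENT_Enter, EVENT_Update, EVENT_Exit };

#define BeginStateMachine   switch (m_state) {
#define EndStateMachine     }
#define DeclareState(s)     break; case s: switch (event) {
#define OnEnter             case EVENT_Enter:
#define OnUpdate            break; case EVENT_Update:
#define OnExit              break; case EVENT_Exit:
#define EndState            break; }

class AgentFSM {
public:
    void update()          { process(EVENT_Update); }
    void setState(State s) { process(EVENT_Exit); m_state = s; process(EVENT_Enter); }
private:
    State m_state = STATE_Wander;

    void process(Event event)
    {
        BeginStateMachine
        DeclareState(STATE_Wander)
            OnUpdate  std::puts("wandering");   // transition checks (setState calls) would go here
        EndState
        DeclareState(STATE_Attack)
            OnEnter   std::puts("prepare weapon");
            OnUpdate  std::puts("attacking");
            OnExit    std::puts("store weapon");
        EndState
        DeclareState(STATE_Flee)
            OnUpdate  std::puts("fleeing");
        EndState
        EndStateMachine
    }
};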
76
Extensions Many possible extensions to basic FSM OnEnter, OnExit Timers Global state, substates Stack-Based (states or entire FSMs) Multiple concurrent FSMs Messaging
77
Common Game AI Techniques A* Pathfinding Command Hierarchy Dead Reckoning Emergent Behavior Flocking Formations Influence Mapping …
78
A* Pathfinding Directed search algorithm used for finding an optimal path through the game world Uses knowledge about the destination to direct the search A* is regarded as the best Guaranteed to find a path if one exists Will find the optimal path Very efficient and fast
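A compact, self-contained A* sketch on a 2D grid with 4-way movement, unit step cost, and a Manhattan-distance heuristic. It returns only the optimal path cost (or -1 if no path exists); a game implementation would also keep parent links to reconstruct the path. This is a generic illustration, not the course's code.

#include <climits>
#include <cstdlib>
#include <queue>
#include <vector>

struct Node { int f, g, x, y; };   // f = g + heuristic
struct ByF  { bool operator()(const Node& a, const Node& b) const { return a.f > b.f; } };

int aStar(const std::vector<std::vector<int>>& blocked,   // blocked[y][x] != 0 means wall
          int sx, int sy, int gx, int gy)
{
    const int h = (int)blocked.size(), w = (int)blocked[0].size();
    auto heuristic = [&](int x, int y) { return std::abs(x - gx) + std::abs(y - gy); };

    std::vector<std::vector<int>> best(h, std::vector<int>(w, INT_MAX));
    std::priority_queue<Node, std::vector<Node>, ByF> open;
    open.push({ heuristic(sx, sy), 0, sx, sy });
    best[sy][sx] = 0;

    const int dx[4] = { 1, -1, 0, 0 }, dy[4] = { 0, 0, 1, -1 };
    while (!open.empty())
    {
        const Node n = open.top(); open.pop();
        if (n.x == gx && n.y == gy) return n.g;   // goal reached: optimal with this heuristic
        if (n.g > best[n.y][n.x]) continue;       // stale queue entry
        for (int i = 0; i < 4; ++i)
        {
            const int nx = n.x + dx[i], ny = n.y + dy[i];
            if (nx < 0 || ny < 0 || nx >= w || ny >= h || blocked[ny][nx]) continue;
            const int g = n.g + 1;
            if (g < best[ny][nx])
            {
                best[ny][nx] = g;
                open.push({ g + heuristic(nx, ny), g, nx, ny });
            }
        }
    }
    return -1;   // no path exists
}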
79
Command Hierarchy Strategy for dealing with decisions at different levels From the general down to the foot soldier Modeled after military hierarchies General directs high-level strategy Foot soldier concentrates on combat
80
US Military Chain of Command
81
Dead Reckoning Method for predicting object's future position based on current position, velocity and acceleration Works well since movement is generally close to a straight line over short time periods Can also give guidance to how far object could have moved Example: in a shooting game, estimating how far to lead a moving target
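The prediction itself is just constant-acceleration kinematics; a one-function sketch (GLM assumed). To lead a target in a shooting game, the shooter would evaluate this at the projectile's expected flight time.

#include <glm/glm.hpp>

// Predict where an object will be dt seconds from now, given its last
// known position, velocity, and acceleration: p + v*dt + 0.5*a*dt^2.
glm::vec3 deadReckon(const glm::vec3& position,
                     const glm::vec3& velocity,
                     const glm::vec3& acceleration,
                     float dt)
{
    return position + velocity * dt + 0.5f * acceleration * dt * dt;
}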
82
Emergent Behavior Behavior that wasn ’ t explicitly programmed Emerges from the interaction of simpler behaviors or rules Rules: seek food, avoid walls Can result in unanticipated individual or group behavior
83
Flocking Example of emergent behavior Simulates flocking birds, schooling fish Developed by Craig Reynolds 1987 SIGGRAPH paper Three classic rules 1. Separation – avoid local flockmates 2. Alignment – steer toward average heading 3. Cohesion – steer toward average position
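A sketch of the three rules combined into one steering vector for a single boid; the weights are hypothetical tuning values, and a real implementation would also limit speed and steering force and choose neighbours by radius.

#include <vector>
#include <glm/glm.hpp>

struct Boid { glm::vec3 position, velocity; };

glm::vec3 flockingSteer(const Boid& self, const std::vector<Boid>& neighbours)
{
    if (neighbours.empty()) return glm::vec3(0.0f);

    glm::vec3 separation(0.0f), avgHeading(0.0f), avgPosition(0.0f);
    for (const Boid& b : neighbours)
    {
        separation  += self.position - b.position;   // 1. separation: avoid local flockmates
        avgHeading  += b.velocity;                   // 2. alignment: toward average heading
        avgPosition += b.position;                   // 3. cohesion: toward average position
    }
    const float n = (float)neighbours.size();
    const glm::vec3 alignment = avgHeading / n - self.velocity;
    const glm::vec3 cohesion  = avgPosition / n - self.position;

    // Hypothetical weights; tuning these changes the character of the flock.
    return 1.5f * (separation / n) + 1.0f * alignment + 1.0f * cohesion;
}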
84
Formations Group movement technique Mimics military formations Similar to flocking, but actually distinct Each unit guided toward formation position Flocking doesn't dictate goal positions
85
Flocking/Formation
86
Influence Mapping Method for viewing/abstracting distribution of power within game world Typically 2D grid superimposed on land Unit influence is summed into each grid cell Unit influences neighboring cells with falloff Facilitates decisions Can identify the “front” of the battle Can identify unguarded areas Plan attacks SimCity: influence of police around the city
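A sketch of building such a map: each unit's influence is stamped into nearby cells with a simple linear falloff. The falloff shape and grid resolution are arbitrary choices here; real games vary them.

#include <algorithm>
#include <cmath>
#include <vector>

struct Unit { int cellX, cellY; float influence; };   // influence < 0 for enemy units

std::vector<std::vector<float>> buildInfluenceMap(int width, int height,
                                                  const std::vector<Unit>& units,
                                                  int radius)
{
    std::vector<std::vector<float>> map(height, std::vector<float>(width, 0.0f));
    for (const Unit& u : units)
        for (int y = std::max(0, u.cellY - radius); y <= std::min(height - 1, u.cellY + radius); ++y)
            for (int x = std::max(0, u.cellX - radius); x <= std::min(width - 1, u.cellX + radius); ++x)
            {
                const float dist = std::sqrt(float((x - u.cellX) * (x - u.cellX) +
                                                   (y - u.cellY) * (y - u.cellY)));
                const float falloff = std::max(0.0f, 1.0f - dist / (radius + 1));
                map[y][x] += u.influence * falloff;   // sum influence with falloff
            }
    return map;   // cells near zero between strongly +/- regions mark the "front"
}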
87
Mapping Example
88
Level-of-Detail AI Optimization technique like graphical LOD Only perform AI computations if player will notice For example Only compute detailed paths for visible agents Off-screen agents don't think as often
89
Manager Task Assignment Manager organizes cooperation between agents Manager may be invisible in game Avoids complicated negotiation and communication between agents Manager identifies important tasks and assigns them to agents For example, a coach in an AI football team
90
Obstacle Avoidance Paths generated from pathfinding algorithm consider only static terrain, not moving obstacles Given a path, agent must still avoid moving obstacles Requires trajectory prediction Requires various steering behaviors
91
Scripting Scripting specifies game data or logic outside of the game's source language Scripting influence spectrum Level 0: Everything hardcoded Level 1: Data in files specify stats/locations Level 2: Scripted cut-scenes (non-interactive) Level 3: Lightweight logic, like trigger system Level 4: Heavy logic in scripts Level 5: Everything coded in scripts
92
Scripting Pros and Cons Pros Scripts changed without recompiling game Designers empowered Players can tinker with scripts Cons More difficult to debug Nonprogrammers required to program Time commitment for tools
93
State Machine Most common game AI software pattern Set of states and transitions, with only one state active at a time Easy to program, debug, understand
94
Stack-Based State Machine Also referred to as push-down automata Remembers past states Allows for diversions, later returning to previous behaviors
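A minimal sketch of the push-down idea: push a diversion state, pop it to resume whatever the agent was doing before. Names are illustrative.

#include <stack>

enum class State { Patrol, Fight, TakeCover };

class StackFSM {
public:
    StackFSM()              { states.push(State::Patrol); }
    void pushState(State s) { states.push(s); }                       // start a diversion
    void popState()         { if (states.size() > 1) states.pop(); }  // resume previous behavior
    State current() const   { return states.top(); }
private:
    std::stack<State> states;
};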
95
Subsumption Architecture Popularized by the work of Rodney Brooks Separates behaviors into concurrently running finite-state machines Well suited for character-based games where moving and sensing co-exist Lower layers Rudimentary behaviors (like obstacle avoidance) Higher layers Goal determination and goal seeking Lower layers have priority System stays robust
96
Terrain Analysis Analyzes world terrain to identify strategic locations Identify Resources Choke points Ambush points Sniper points Cover points
97
Trigger System Highly specialized scripting system Uses if/then rules If condition, then response Simple for designers/players to understand and create More robust than general scripting Tool development simpler than general scripting
98
Promising AI Techniques Show potential for future Generally not used for games May not be well known May be hard to understand May have limited use May require too much development time May require too many resources
99
Bayesian Networks Performs humanlike reasoning when faced with uncertainty Potential for modeling what an AI should know about the player Alternative to cheating RTS example: AI can infer the existence or nonexistence of player-built units
100
Example
101
Blackboard Architecture Complex problem is posted on a shared communication space Agents propose solutions Solutions scored and selected Continues until problem is solved Alternatively, use concept to facilitate communication and cooperation
102
Decision Tree Learning Constructs a decision tree based on observed measurements from game world Best known game use: Black & White Creature would learn and form “opinions” Learned what to eat in the world based on feedback from the player and world
103
Filtered Randomness Filters randomness so that it appears random to players over short term Removes undesirable events Like coin coming up heads 8 times in a row Statistical randomness is largely preserved without gross peculiarities Example: In an FPS, opponents should randomly spawn from different locations (and never spawn from the same location more than 2 times in a row).
104
Genetic Algorithms Technique for search and optimization that uses evolutionary principles Good at finding a solution in complex or poorly understood search spaces Typically done offline before game ships Example: Game may have many settings for the AI, but interaction between settings makes it hard to find an optimal combination
105
Flowchart
106
N-Gram Statistical Prediction Technique to predict next value in a sequence In the sequence 18181810181, it would predict 8 as being the next value Example In a street fighting game, the player just did Low Kick followed by Low Punch Predict their next move and expect it
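A small sketch of a 3-gram predictor over a move history encoded as characters; the structure is hypothetical, but it reproduces the slide's example (after the context 81, the most frequent next symbol in 18181810181 is 8).

#include <map>
#include <string>

struct NGramPredictor {
    std::map<std::string, std::map<char, int>> counts;   // 2-symbol context -> next-symbol counts

    void observe(const std::string& history)
    {
        for (size_t i = 2; i < history.size(); ++i)
            ++counts[history.substr(i - 2, 2)][history[i]];
    }

    char predict(const std::string& lastTwo) const   // returns 0 if the context was never seen
    {
        const auto it = counts.find(lastTwo);
        if (it == counts.end()) return 0;
        char best = 0; int bestCount = -1;
        for (const auto& kv : it->second)
            if (kv.second > bestCount) { best = kv.first; bestCount = kv.second; }
        return best;
    }
};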
107
Neural Networks Complex non-linear functions that relate one or more inputs to an output Must be trained with numerous examples Training is computationally expensive making them unsuited for in-game learning Training can take place before game ships Once fixed, extremely cheap to compute
108
Example
109
Planning Planning is a search to find a series of actions that change the current world state into a desired world state Increasingly desirable as game worlds become richer and more complex Requires Good planning algorithm Good world representation Appropriate set of actions
110
Player Modeling Build a profile of the player's behavior Continuously refine during gameplay Accumulate statistics and events Player model then used to adapt the AI Make the game easier: if the player is not good with some weapons, avoid them Make the game harder: if the player is not good with some weapons, exploit this weakness
111
Production (Expert) Systems Formal rule-based system Database of rules Database of facts Inference engine to decide which rules trigger – resolves conflicts between rules Example: Soar was used to experiment with Quake 2 bots Upwards of 800 rules for a competent opponent
112
Reinforcement Learning Machine learning technique Discovers solutions through trial and error Must reward and punish at appropriate times Can solve difficult or complex problems like physical control problems Useful when AI's effects are uncertain or delayed
113
Reputation System Models player's reputation within the game world Agents learn new facts by watching player or from gossip from other agents Based on what an agent knows Might be friendly toward player Might be hostile toward player Affords new gameplay opportunities “Play nice OR make sure there are no witnesses”
114
Smart Terrain Put intelligence into inanimate objects Agent asks object how to use it: how to open the door, how to set the clock, etc. Agents can use objects they weren't originally programmed for Allows for expansion packs or user-created objects, like in The Sims Informed by Affordance Theory Objects by their very design afford a very specific type of interaction
115
Speech Recognition Players can speak into microphone to control some aspect of gameplay Limited recognition means only simple commands possible Problems with different accents, different genders, different ages (child vs adult)
116
Text-to-Speech Turns ordinary text into synthesized speech Cheaper than hiring voice actors Quality of speech is still a problem Not particularly natural sounding Intonation problems Algorithms not good at “voice acting”; the mouth also needs to be animated based on the text Large disc capacities mean recording human voices is not that big a problem No need to resort to a worse-sounding solution
117
Promising AI Techniques: Weakness Modification Learning General strategy to keep the AI from losing to the player in the same way every time Two main steps 1. Record a key gameplay state that precedes a failure 2. Recognize that state in the future and change something about the AI behavior AI might not win more often or act more intelligently, but won't lose in the same way every time Keeps “history from repeating itself”