1
Artificial Intelligence in Games Week 9
Steve Rabin
2
Project 4: Gameplay Improvements
How's progress? Any issues to discuss? Remember the due date/time: if running on my laptop, WED 6:00 PM.
3
Quick and Dirty Intro to Neural Networks
4
Neural Networks What are they?
Based on biological neurons in the brain.
NNs are complex non-linear functions that relate one or more input variables to an output variable.
NNs are a series of identical non-linear processing elements (analogous to neurons) connected together in a network by weights (analogous to synapses).
5
Neural Networks
6
Neural Networks
7
Neural Networks Must be trained to produce a particular function
Show it examples of inputs and outputs
Weights adjusted to minimize error between output and desired output (backpropagation)
Can require hundreds or thousands of examples
Computationally intensive; for games, training must be done offline
Not guaranteed to learn the right thing
Requires too many examples
Blazingly fast once it is trained and frozen
8
Neural Networks What are neural networks really good at?
Face and fingerprint recognition
Handwriting recognition
Gesture recognition
NNs excel at taking noisy data and processing it to produce a match
Game success stories:
  Colin McRae Rally 2.0: steering racing cars
  Forza Motorsport: "Drivatar"
  Creatures: control and learning
Bottom line: statistically, game developers are not using NNs
9
Perceptrons: A perceptron network is a simpler version of a NN
Single-layer neural network
Easier to train than an NN or decision tree
Each perceptron outputs a "yes" or "no": either it gets stimulated enough to trigger, or it does not
Can learn Boolean decisions such as attack or don't attack
Drawback: can only learn simple (linearly separable) functions
10
Perceptrons: Black & White Example
Three inputs were used to determine if the Creature was hungry: hunger, tasty food, unhappiness. If the Creature ate and received positive or negative feedback, the weights were adjusted, thus facilitating learning.
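As a rough illustration of the mechanism above, here is a minimal perceptron-style sketch in C++: a weighted sum of the three inputs is squashed into a desire value, and the weights are nudged toward the feedback the Creature received. The struct and member names are illustrative, not taken from Black & White.

#include <cmath>

struct HungerPerceptron
{
    float weights[3] = { 0.5f, 0.5f, 0.5f };   //hunger, tasty food, unhappiness
    float learningRate = 0.1f;

    //Weighted sum squashed into (0,1); thresholding this gives the yes/no "eat" decision
    float Evaluate(const float inputs[3]) const
    {
        float sum = 0.0f;
        for (int i = 0; i < 3; ++i)
            sum += weights[i] * inputs[i];
        return 1.0f / (1.0f + std::exp(-sum));
    }

    bool ShouldEat(const float inputs[3]) const { return Evaluate(inputs) > 0.5f; }

    //feedback is the "intended" desire implied by the player's stroke or slap
    void Train(const float inputs[3], float feedback)
    {
        float error = feedback - Evaluate(inputs);
        for (int i = 0; i < 3; ++i)
            weights[i] += learningRate * error * inputs[i];   //simple delta rule
    }
};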
11
Perceptrons: Black & White Example
Inputs: hunger, tasty food, unhappiness
12
Perceptrons: Black & White Example
(Worked example on slide: per-source Values and Weights for Hunger, Tastiness, and Unhappiness, with the resulting Actual desire and the Intended desire implied by feedback.)
13
Decision Tree
Relates inputs from the game world to an output representing something you want to predict
Series of rules arranged in a tree structure
Example: will the bot survive an engagement with the player?
  Is the bot's health low? Yes: will not survive. No: check the next question.
  Is the bot's ammunition low? Yes: will not survive. No: will survive.
Important for learning: algorithms exist to create decision trees in near real-time (ID3)
(Tree on slide: Health Low? Yes -> Die; No -> Ammo Low? Yes -> Die; No -> Live)
14
Decision Tree Learning: Black & White Example
What objects will satisfy a Creature's desire to eat?
Scenario:
  Creature eats something
  Creature gets either positive or negative feedback (player can stroke or slap the Creature)
  Certain objects are tastier than others
A decision tree is created that reflects feedback and past experience
The decision tree influences future decisions to eat a particular object
15
Decision Tree Learning: Black & White Example
What Creature Ate | Feedback (how tasty)
A big rock | -1.0
A small rock | -0.5
| -0.4
A tree | -0.2
A cow | +0.6
16
Decision Tree Learning: Minimizing Entropy
Pick decisions that minimize entropy. A purely random (50/50) outcome has an entropy of 1.
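For reference, a small sketch of the entropy measure being minimized, assuming each training example is simply labeled positive or negative (the function name is mine, not from the slides):

#include <cmath>

//Entropy of a set of examples split into positive and negative outcomes.
//A pure set has entropy 0; a 50/50 split has entropy 1.
double Entropy(int positives, int negatives)
{
    double total = positives + negatives;
    if (positives == 0 || negatives == 0 || total == 0)
        return 0.0;
    double p = positives / total;
    double n = negatives / total;
    return -p * std::log2(p) - n * std::log2(n);
}

//ID3 picks the question whose answer-weighted entropy is lowest.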
17
Decision Tree Learning: Anger Decision Tree
What Creature Attacked | Feedback from Player
Friendly, weak, Celtic | -1.0
Enemy, weak, Celtic | +0.4
Friendly, strong, Norse |
Enemy, strong, Norse | -0.2
Friendly, weak, Greek |
Enemy, medium, Greek | +0.2
Enemy, strong, Greek | -0.4
Enemy, medium, Aztec | 0.0
Friendly, weak, Aztec |
18
Decision Tree Learning: Anger Decision Tree
19
Decision Tree Learning: Anger Decision Tree
20
Decision Tree Learning: Anger Decision Tree
21
Decision Tree Demo
22
Genetic Algorithms
23
Genetic Algorithms
Based on evolutionary principles
  A collection of genes forms a chromosome
  All combinations of genes define the search space
  A chromosome defines a point in the search space
Technique
  Create a population of chromosomes
  Mate chromosomes to create a new generation: crossover, mutation, elitism (see the sketch below)
GAs outperform many other techniques when the search space contains many optima
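A minimal sketch of the mating step listed above, assuming a chromosome is just a fixed-length array of float genes (the function names and representation are illustrative):

#include <cstdlib>
#include <vector>

//One-point crossover: the child takes mom's genes up to a random cut, dad's after it.
std::vector<float> Crossover(const std::vector<float>& mom, const std::vector<float>& dad)
{
    std::vector<float> child(mom.size());
    size_t cut = std::rand() % mom.size();
    for (size_t i = 0; i < child.size(); ++i)
        child[i] = (i < cut) ? mom[i] : dad[i];
    return child;
}

//Mutation: occasionally perturb a gene by a small random amount.
void Mutate(std::vector<float>& genes, float mutationRate, float amount)
{
    for (float& g : genes)
        if ((float)std::rand() / RAND_MAX < mutationRate)
            g += amount * (2.0f * std::rand() / RAND_MAX - 1.0f);
}

//Elitism: copy the best chromosome(s) unchanged into the next generation.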
24
Genetic Algorithms: Crossover
25
Genetic Algorithms Extremely interesting and fascinating technology
What is it used for? An optimization technique to find a solution
Most useful when the search space is poorly understood
Unfortunately, not very useful for game development:
  Too slow for real-time
  Not guaranteed to find an optimal solution
Can be used during development to discover the best combination of settings
If the game changes slightly during tuning, the GA result is likely invalid
26
Genetic Algorithms Demo Genome encoding:
Steering | Throttle | Steering | Throttle | …
27
Movie Time!
28
Gesture Recognition
29
Gesture Recognition What kind of algorithm can do this?
Generalizing from examples? How? Comparing against examples?
30
Gesture Recognition: Steps
Record gesture
  Must know when the gesture starts and stops! (holding a button or passing a threshold)
Process gesture
  Position independence
  Size independence
  Irregularities
  Redundant data
  Feature extraction
Apply matching algorithm
31
Gesture Recognition: Processing
32
Gesture Recognition: Processing
Normalize size
Target length
Target segments
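A rough sketch of that processing step, assuming the recorded gesture is a list of 2D points: the stroke is resampled into a fixed number of equal-length segments (size independence), and only each segment's direction is kept as the feature (position independence). The helper names are mine, not from the slides.

#include <cmath>
#include <vector>

struct Point { float x, y; };

static float Dist(const Point& a, const Point& b)
{
    return std::hypot(b.x - a.x, b.y - a.y);
}

//Resample the stroke into n points spaced equally along its own length.
std::vector<Point> Resample(std::vector<Point> pts, int n)
{
    float pathLength = 0.0f;
    for (size_t i = 1; i < pts.size(); ++i)
        pathLength += Dist(pts[i - 1], pts[i]);

    float interval = pathLength / (n - 1);
    float walked = 0.0f;
    std::vector<Point> out { pts[0] };

    for (size_t i = 1; i < pts.size(); ++i)
    {
        float d = Dist(pts[i - 1], pts[i]);
        if (d <= 0.0f)
            continue;
        if (walked + d >= interval)
        {
            float t = (interval - walked) / d;
            Point q { pts[i - 1].x + t * (pts[i].x - pts[i - 1].x),
                      pts[i - 1].y + t * (pts[i].y - pts[i - 1].y) };
            out.push_back(q);
            pts.insert(pts.begin() + i, q);   //q becomes the new previous point
            walked = 0.0f;
        }
        else
        {
            walked += d;
        }
    }
    while ((int)out.size() < n)
        out.push_back(pts.back());            //guard against floating-point rounding
    return out;
}

//Feature extraction: one direction angle per segment. Position independence falls
//out automatically because only differences between consecutive points are used.
std::vector<float> SegmentAngles(const std::vector<Point>& pts)
{
    std::vector<float> angles;
    for (size_t i = 1; i < pts.size(); ++i)
        angles.push_back(std::atan2(pts[i].y - pts[i - 1].y, pts[i].x - pts[i - 1].x));
    return angles;
}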
33
Gesture Recognition: Nearest Neighbor
Which of Example 1, Example 2, or Example 3 best matches the Player Move?
34
Gesture Recognition: Nearest Neighbor
Example 1: error = 0°; Example 2: error = 90°; Example 3: error = 90°
35
Gesture Recognition: Nearest Neighbor
Example 1: error = 45°; Example 2: error = 45°; Example 3: error = 0°
36
Gesture Recognition: Nearest Neighbor
Example 1: error = 45°; Example 2: error = 0°; Example 3: error = 0°
37
Gesture Recognition: Nearest Neighbor
Summing the segment errors: Example 1: 0+45+45 = 90; Example 2: 90+45+0 = 135; Example 3: 90+0+0 = 90 (vs. the Player Move)
38
Gesture Recognition: Nearest Neighbor
Square the individual errors, then sum! Example 1: 0²+45²+45² = 4050; Example 2: 90²+45²+0² = 10125; Example 3: 90²+0²+0² = 8100 (vs. the Player Move)
39
Gesture Recognition: Punch'n'Crunch compares the gesture vectors and performs a nearest-neighbor match
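A sketch of that nearest-neighbor comparison, assuming each gesture has already been reduced to one direction angle (in degrees) per segment as described earlier. The squared per-segment errors are summed and the example with the smallest total wins; function names are illustrative, not from Punch'n'Crunch.

#include <cfloat>
#include <cmath>
#include <vector>

//Sum of squared angle differences (in degrees) between the player's gesture
//and one recorded example, segment by segment.
float GestureError(const std::vector<float>& player, const std::vector<float>& example)
{
    float sum = 0.0f;
    for (size_t i = 0; i < player.size(); ++i)
    {
        float diff = std::fabs(player[i] - example[i]);
        if (diff > 180.0f)
            diff = 360.0f - diff;    //wrap around, e.g. 350° vs 10° is a 20° error
        sum += diff * diff;          //square the individual error, then sum
    }
    return sum;
}

//Nearest neighbor: return the index of the example with the smallest total error.
int BestMatch(const std::vector<float>& player, const std::vector<std::vector<float>>& examples)
{
    int best = -1;
    float bestError = FLT_MAX;
    for (size_t i = 0; i < examples.size(); ++i)
    {
        float error = GestureError(player, examples[i]);
        if (error < bestError)
        {
            bestError = error;
            best = (int)i;
        }
    }
    return best;
}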
40
Microsoft’s Kinect $149.99 at retail
Sold 10M+ so far
$56 worth of parts ($17 for the PrimeSense motion detector)
$500M ad campaign during Christmas
Ad spending was $100 per Kinect sold
42
“Microsoft exec caught in privacy snafu, says Kinect might tailor ads to you”
From Engadget.com: Microsoft's Dennis Durkin voiced an interesting idea at an investment summit last week – the idea that the company's Kinect camera might pass data to advertisers about the way you look, play and speak. "We can cater what content gets presented to you based on who you are," he told investors, suggesting that the Kinect offered business opportunities that weren't possible "in a controller-based world."
43
“Microsoft exec caught in privacy snafu, says Kinect might tailor ads to you”
“And over time that will help us be more targeted about what content choices we present, what advertising we present, how we get better feedback. And data about how many people are in a room when an advertisement is shown, how many people are in a room when a game is being played, how are those people engaged with the game? How are they engaged with a sporting event? Are they standing up? Are they excited? Are they wearing Seahawks jerseys?” – Microsoft's Dennis Durkin
44
“Microsoft exec caught in privacy snafu, says Kinect might tailor ads to you”
From Engadget.com: …but moreover it's explicitly against the privacy policy Microsoft presents Kinect users. "Third party partners use aggregated data to deliver Kinect experiences (games or applications), to understand how customers use their Kinect experiences, and to improve performance or even to help plan new experiences," the Kinect Privacy and Online Safety FAQ reads, but also "They are not permitted to use the information for marketing purposes such as selling you games or services, or for personalizing advertising."
45
Kinect: Depth Camera
46
Kinect: Depth Camera
47
Kinect: Depth Camera
48
Kinect: Your Bones
49
Kinect: Skeleton Patent
“To determine whether a target or object in the scene corresponds to a human target, each of the targets may be flood filled and compared to a pattern of a human body model. Each target or object that matches the human body model may then be scanned to generate a skeletal model associated therewith. The skeletal model may then be provided to the computing environment such that the computing environment may track the skeletal model, render an avatar associated with the skeletal model, and may determine which controls to perform in an application executing on the computing environment based on, for example, gestures of the user that have been recognized from the skeletal model. A gesture recognizer engine, the architecture of which is described more fully below, is used to determine when a particular gesture has been made by the user.”
50
Kinect: Skeleton from a Point Cloud
Recorded hundreds of people moving in front of a depth camera
Hand-labeled a skeleton on top of every frame
Recorded similar mocap data for the movements
Machine learning matches the noisy data to a skeleton (with probabilities)
51
Kinect: Skeleton
52
Kinect Step 1: (from Popular Mechanics, Jan 2010)
As you stand in front of the camera, it judges the distance to different points on your body. In the image on the far left, the dots show what it sees, a so-called "point cloud" representing a 3-D surface; a skeleton drawn there is simply a rudimentary guess. (The image on the top shows the image perceived by the color camera.)
53
Kinect Step 2: (from Popular Mechanics, Jan 2010)
Then the brain guesses which parts of your body are which. It does this based on all of its experience with body poses. Depending on how similar your pose is to things it's seen before, Natal can be more or less confident of its guesses. In the color-coded person (bottom center), the darkness, lightness, and size of different squares represent how certain Natal is that it knows what body-part that area belongs to.
54
Kinect Step 2 (continued): (from Popular Mechanics, Jan 2010)
(For example, the three large red squares indicate that it’s highly probable that those parts are “left shoulder,” “left elbow” and “left knee"; as the pixels become smaller and muddier in color, such as the grayish pixels around the hands, that’s an indication that Natal is hedging its bets and isn’t very sure of its identity.)
55
Kinect Step 3: (from Popular Mechanics, Jan 2010)
Then, based on the probabilities assigned to different areas, Natal comes up with all possible skeletons that could fit with those body parts. (This step isn't shown in the image, but it looks similar to the stick-figure drawn on the left, except there are dozens of possible skeletons overlaid on each other.) It ultimately settles on the most probable one. Its reasoning here is partly based on its experience, and partly on more formal kinematics models that programmers added in.
56
Kinect Step 4: (from Popular Mechanics, Jan 2010)
Once Natal has determined it has enough certainty about enough body parts to pick the most probable skeletal structure, it outputs that shape to a simplified 3D avatar [image at right]. That’s the final skeleton that will be skinned with clothes, hair, and other features and shown in the game.
57
Kinect Step 4: (from Popular Mechanics, Jan 2010)
58
Kinect Gesture Recognition?
Microsoft doesn't distribute any code to recognize poses or gestures (maybe because it's a patent minefield…)
The Kinect SDK just outputs raw skeleton poses
It's up to individual developers to figure out how to use the raw skeleton data
59
Randomness
60
Randomness What can randomness do for games? Why do we want it?
How do you get randomness?
What is pseudo-randomness (PRNG)?
How is it implemented?
  In Microsoft VC7?
  In Metrowerks CodeWarrior?
  In SN Systems ProDG?
61
VC7's rand() function

#define RAND_MAX 0x7FFF
static long holdrand;

void srand(unsigned int seed)
{
    holdrand = (long)seed;
}

int rand(void)
{
    return (((holdrand = holdrand * 214013L + 2531011L) >> 16) & 0x7fff);
}
62
VC7's rand() function: Linear Congruential PRNG
Xn+1 = (a * Xn + c) mod M
Each call returns ((seed * a + c) >> 16) & 0x7fff
Period can't exceed M (0x7fff = 32767)
c is relatively prime to M
63
Randomness in Games What should the seed be?
Are there any pure sources of entropy?
Sometimes you need predictable randomness. Why? (SimCity)
How do you get predictable randomness?
What happens on cross-platform games?
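One common way to get predictable, cross-platform randomness is to ship your own tiny PRNG instead of relying on each platform's rand(); a hypothetical sketch (the class name and constants are illustrative, reusing the same LCG form as VC7's rand()):

class GameRandom
{
public:
    explicit GameRandom(unsigned int seed) : state(seed) {}

    //Advance the state and return the high bits, exactly like an LCG-based rand().
    //Replaying the same seed replays the same sequence on every platform and compiler.
    int Next()
    {
        state = state * 214013u + 2531011u;
        return (int)((state >> 16) & 0x7fff);
    }

private:
    unsigned int state;
};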
64
Randomness: Four types of randomness
  Random Chance: 10% chance of direct hit
  Random Range: play 1 of 8 different idle animations; spawn enemy at 1 of 5 different locations
  Random Real: random number in range [0,1] (see the helper sketch below)
  Random Gaussian: random number in range [-1,1] with Gaussian distribution
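The first three types map onto very small helpers; a sketch built on rand() for brevity (the function names are mine):

#include <cstdlib>

//Random Chance: true with the given probability, e.g. Chance(0.10f) for a 10% direct hit
bool Chance(float probability)
{
    return (float)std::rand() / RAND_MAX < probability;
}

//Random Range: integer in [0, count-1], e.g. RandomRange(8) to pick 1 of 8 idle animations
int RandomRange(int count)
{
    return std::rand() % count;
}

//Random Real: float in the range [0, 1]
float RandomReal()
{
    return (float)std::rand() / RAND_MAX;
}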
65
Gaussian Randomness (Normal Distribution, 68–95–99.7 Rule)
66
Central Limit Theorem
The sum of K uniform random numbers in the range [-1, 1] will approach a Gaussian distribution with mean zero and standard deviation sqrt(K/3)
Example: add up three uniform random numbers
  Results in a mean of zero
  Results in a standard deviation of 1 (convenient, since the mean and standard deviation are identical to a standard normal distribution)
But the tails are missing!!! (but we don't care!)
67
Gaussian Randomness Generator (Central Limit Theorem)
double gaussrand(void)
{
    static unsigned long seed = 2463534242UL;   //any nonzero seed value works here
    double sum = 0;
    for (int i = 0; i < 3; i++)
    {
        //Uses an xorshift PRNG (assumes 32-bit unsigned long)
        unsigned long hold = seed;
        seed ^= seed << 13;
        seed ^= seed >> 17;
        seed ^= seed << 5;
        long r = hold + seed;
        sum += (double)r * (1.0 / 0x7FFFFFFF);
    }
    //Returns [-3.0,3.0] (66.7%–95.8%–100%)
    return sum;
}
68
2D Gaussian Randomness (Gaussian randomness at a random angle)
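A sketch of the idea, reusing the gaussrand() routine above: pick a uniform random direction and a Gaussian-distributed distance so 2D offsets cluster near the center (handy for aim error). The function name and scaling are illustrative.

#include <cmath>
#include <cstdlib>

double gaussrand(void);   //the central-limit generator from the earlier slide, output in [-3, 3]

void Gaussian2D(float maxRadius, float& outX, float& outY)
{
    float angle = 2.0f * 3.14159265f * (float)std::rand() / RAND_MAX;   //uniform direction
    float radius = (float)(std::fabs(gaussrand()) / 3.0) * maxRadius;   //Gaussian-shaped distance
    outX = radius * std::cos(angle);
    outY = radius * std::sin(angle);
}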
69
Ponder this... Why do many distributions in nature follow a Gaussian distribution (or bell curve), such as human intelligence or the heights of trees? The central limit theorem alludes to the answer. When there are many uniform (or even non-uniform) random variations that contribute to a given property, the distribution of that property becomes more normal (rather than remaining uniform). While this is a gross simplification of most systems in nature, it does shed light on why so many properties and systems roughly display a Gaussian distribution.
70
Randomness Can poor randomness hurt games?
What is poor randomness? Can poor randomness hurt candy? Are the colors of M&M’s inside “fun size” packs random?
71
M&M’s Actual Frequency
72
Randomness Lessons: Do we want true randomness in games?
M&M's aren't equally random; why?
Small samples of randomness don't look very random
Do we want true randomness in games?
  When is it a good idea? When is it a bad idea?
73
Randomness Is the rand() function sufficiently random?
What does rand() return?
What other options exist?
Is rand() random enough for your game? Is rand() too random for your game?
Examples:
  5 enemy spawn points
  Enemy has a 10% chance of a direct hit on the player
  Random idle animation
74
Appearance of Randomness
What looks random? Each sequence has 20 random heads and tails. How random does each look?
75
Appearance of Randomness
(Each sequence on the slide is labeled with its number of alternations.)
76
Appearance of Randomness
What is the probability of these penny-flip-style sequences?
  5 possible enemy spawn points (1,2,3,4,5): 2, 2, 2, 2, 2 vs. 1, 2, 1, 1, 2
  10% chance of direct hit: hit, hit, hit, hit, miss, miss, miss
  50% chance of giving birth to a girl: girl, girl, girl, girl, girl, girl, girl, girl
77
Appearance of Randomness
In-Class Exercise #2 Create randomness (the hard way)
78
Can you Beat Roulette? Run this algorithm until rich
1. Bet $1 on black
2. Spin the wheel
3. If you win, go to step 1
4. If you lose, double your bet on black plus $1, and go to step 2
Can this work?!?
79
Appearance of Randomness
True randomness doesn't appear random to humans! (Supported by psychology researchers at Stanford and the University of Massachusetts)
Can we filter rand() to get rid of anomalies? Does this destroy the randomness?
What anomalies?
  Too many alternations
  Too few alternations
  Repeating motifs
80
Filtered Randomness
(Slide compares a raw rand() sequence with its filtered counterpart.)
81
Boolean Chance: Flipped the next value:
  (-) to make it alternate between 12 and 14 times
  (') to prevent a run of more than three values
  (.) to eliminate a repeating motif of four values
  (^) to eliminate a repeated pattern
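As one concrete example of these filters, here is a sketch that enforces just the "no run longer than three" rule by flipping the offending value; the real filter on the slide tracks several rules at once, and the class name is mine.

#include <cstdlib>

class FilteredBool
{
public:
    bool Next()
    {
        bool value = std::rand() & 1;              //raw 50/50 value
        if (value == last && runLength >= 3)
            value = !value;                        //flip to break the run
        runLength = (value == last) ? runLength + 1 : 1;
        last = value;
        return value;
    }

private:
    bool last = false;
    int runLength = 0;
};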
82
Integer Range How can we filter integers, like:
7, 5, 4, 5, 8, 8, 2, 2, 6, 4, 8, 7, 5, 4, 4, 5 1, 6, 1, 0, 8, 7, 5, 1, 4, 5, 3, 8, 9, 4, 1, 8 8, 9, 5, 3, 0, 9, 4, 2, 5, 6, 6, 8, 3, 4, 0, 0
83
Integer Range
1. () No repeating numbers (optional)
2. [] No excessive repeating in the last 10 values
3. {} No increasing/decreasing runs of 3
4. $$ No bottom or top of range for too long
5. ## No two pairs in a row, like 5533 or 2288
6. -- No motifs, like 4545 or 2929
7. ** No 3-digit motifs in the last 10, like {5}7-6-9[7]*1*(9)$8$0130(0)$2$582
84
Gaussian Randomness
Aiming shots, perturbing creature settings, random tree placement

float x1, x2, w, y1, y2;
do {
    x1 = 2.0f*rand()*(1.0f/RAND_MAX) - 1.0f;
    x2 = 2.0f*rand()*(1.0f/RAND_MAX) - 1.0f;
    w = x1 * x1 + x2 * x2;
} while ( w >= 1.0f );
//Natural log, per the standard polar Box-Muller transform
w = (float)sqrt( (-2.0f * log( w ) ) / w );
//y1 and y2 are the random Gaussian numbers
y1 = (x1 * w) / 1.5f;
y2 = (x2 * w) / 1.5f;
85
Filtered Randomness: Gaussian Randomness
Aiming shots, perturbing creature settings, random tree placement
86
Filtered Randomness Rules for Gaussian randomness in range [-1,1]
1. Consecutive numbers must differ by more than 0.02
2. Three consecutive numbers must differ by more than 0.1
3. Five consecutive numbers must not be increasing or decreasing
4. No more than three consecutive numbers above or below zero
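A sketch of how those four rules might be enforced by redrawing any candidate value that violates them, using the gaussrand() routine from earlier scaled into [-1, 1]; the exact interpretation of rule 2 and the helper names are my assumptions.

#include <cmath>
#include <deque>

double gaussrand(void);   //the generator from the earlier slide, output in [-3, 3]

//Draw candidates until one passes all four rules, then remember it.
double FilteredGauss(std::deque<double>& history)
{
    for (;;)
    {
        double v = gaussrand() / 3.0;    //scale into [-1, 1]
        size_t n = history.size();

        //Rule 1: consecutive numbers must differ by more than 0.02
        if (n >= 1 && std::fabs(v - history[n - 1]) <= 0.02)
            continue;
        //Rule 2: three consecutive numbers must differ by more than 0.1
        if (n >= 2 && std::fabs(v - history[n - 1]) <= 0.1 &&
                      std::fabs(v - history[n - 2]) <= 0.1)
            continue;
        //Rule 3: five consecutive numbers must not be increasing or decreasing
        if (n >= 4)
        {
            bool increasing = true, decreasing = true;
            double prev = history[n - 4];
            for (size_t i = n - 3; i < n; ++i)
            {
                increasing = increasing && history[i] > prev;
                decreasing = decreasing && history[i] < prev;
                prev = history[i];
            }
            if ((increasing && v > prev) || (decreasing && v < prev))
                continue;
        }
        //Rule 4: no more than three consecutive numbers above or below zero
        if (n >= 3 && ((v > 0 && history[n - 1] > 0 && history[n - 2] > 0 && history[n - 3] > 0) ||
                       (v < 0 && history[n - 1] < 0 && history[n - 2] < 0 && history[n - 3] < 0)))
            continue;

        history.push_back(v);
        if (history.size() > 10)
            history.pop_front();
        return v;
    }
}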
87
Filtered Randomness Rules for Gaussian randomness in range [-1,1]
88
Filtered Randomness: Quality of randomness can be measured with the ENT benchmarking program