1 Advanced Graphics / Computer Animation: Autonomous Agents
Spring 2002, Professor Brogan

2 Quick Reminder
Assignment 1 “take away message”
– It’s hard to build intuitive interfaces
  Adding knots to a spline (beginning, middle, end)
– Graphics makes it easy to add feedback that lets the user decide how to accomplish tasks
  Highlight potential objects of an action before execution, and change their graphical state once again when an action is initiated

3 Autonomous Agents
Particles in a simulation are dumb. Make them smart:
– Let them add forces (virtual jet packs)
– Let them select destinations
– Let them select neighbors
Give them artificial intelligence.

4 History of AI / Autonomous Agents – Cliff’s Notes Version
1950s – Newell, Simon, and McCarthy
– General Problem Solver (GPS)
  Uses means-ends analysis: subdivide the problem, then apply transformation functions (actions) to solve the subtasks
– Useful for well-defined tasks: theorem proving, word problems, chess
– Relied on iteration and exhaustive search

5 History of AI
1960s – ELIZA, chess, natural language processing, neural networks (birth and death), numerical integration
1970s–80s – Expert systems, fuzzy logic, mobile robots, backpropagation networks
– Use the “massive” storage capabilities of computers to store thousands of “rules”
– Continuous (not discrete) inputs
– Multi-layer neural networks

6 History of AI
1990s – Major advances in all areas of AI:
– machine learning
– intelligent tutoring
– case-based reasoning
– multi-agent planning
– scheduling
– data mining
– natural language understanding and translation
– vision
– games

7 So Many Choices
Important factors in f(state, action) = state_new:
– # of inputs (state) and outputs (actions)
– Whether previous states matter (the Markov property)
– Whether actions are orthogonal
– Continuous versus discrete variables
– Differentiability of f( )
– Model of f( )
– Costs of actions
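To make the formulation concrete, here is a minimal Python sketch of such a transition function, assuming simple point-mass dynamics with unit mass and Euler integration (our choices; the slides don't specify a representation):

```python
import numpy as np

def f(state, action, dt=0.1):
    """Transition function f(state, action) = state_new.
    state: dict with 'pos' and 'vel' (2D numpy arrays).
    action: a force vector applied for one time step.
    Assumes a point mass with m = 1 and Euler integration."""
    acc = np.asarray(action, dtype=float)   # a = F/m with m = 1
    vel = state["vel"] + acc * dt           # integrate acceleration
    pos = state["pos"] + vel * dt           # integrate velocity
    return {"pos": pos, "vel": vel}

# Markov property: state_new depends only on the current state and
# action, not on how we arrived at the current state.
s0 = {"pos": np.zeros(2), "vel": np.zeros(2)}
s1 = f(s0, np.array([1.0, 0.0]))
```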

8 Example: Path Planning
State
– Position, velocity, obstacle positions, goal (hunger, mood, motivations), what you’ve tried
Actions
– Movement (joint torques), get in car, eat, think, select new goal
The same factors apply to f(state, action) = state_new:
– # of inputs (state) and outputs (actions)
– Whether previous states matter (the Markov property)
– Whether actions are orthogonal
– Continuous versus discrete variables
– Differentiability of f( )
– Model of f( )
– Costs of actions

9 Path Planning
Do previous states matter?
– Going in circles
Are actions orthogonal?
– Satisfy multiple goals with one action
Continuous versus discrete?
Differentiability of f( )
– If f(state, action1) = state_new_1, does f(state, 1.1 * action1) = 1.1 * state_new_1?
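The scaling question on the slide is essentially a linearity probe. A small numeric check, reusing the point-mass f sketched under slide 7 (any transition function returning a 'pos' entry would do):

```python
import numpy as np

def scales_linearly(f, state, action, k=1.1, tol=1e-9):
    """Does scaling the action by k scale the resulting change in
    position by k? True for linear dynamics like the point mass
    above; false in general (e.g., with friction or contact)."""
    rest = f(state, np.zeros_like(action))   # drift with no action
    d1 = f(state, action)["pos"] - rest["pos"]
    dk = f(state, k * np.asarray(action))["pos"] - rest["pos"]
    return np.allclose(dk, k * d1, atol=tol)

# scales_linearly(f, s0, np.array([1.0, 0.0]))  ->  True
```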

10 Path Planning
Model of f( )
– Do you know the result of f(s, a) before you execute it?
– Compare path planning in a dark, unknown room to path planning in a room with a map
Costs of actions
– If many actions take state -> new_state, how do you pick just one?
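One way to read the cost question (an illustrative sketch, not from the slides): when a model of f is available, predict the outcome of each candidate action without executing it, and break ties by cost.

```python
def cheapest_action(state, actions, f, reaches_goal, cost):
    """Among candidate actions whose predicted outcome reaches the
    goal, return the minimum-cost one. Requires a model of f: we
    evaluate f(state, a) without executing the action for real."""
    candidates = [a for a in actions if reaches_goal(f(state, a))]
    return min(candidates, key=cost) if candidates else None
```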

11 Let’s Keep It Simple
Make particles that can intelligently navigate through a room with obstacles
Each particle has a jet pack
– The jet pack can swivel (yaw torque)
– The jet pack can propel forward (forward thrust)
Previous states don’t matter
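A minimal sketch of such a jet-pack particle, assuming unit mass, unit moment of inertia, and Euler integration (our assumptions):

```python
import numpy as np

class JetPackParticle:
    """A particle driven by a swiveling jet pack: its action each
    step is one yaw torque plus one forward thrust."""

    def __init__(self):
        self.pos = np.zeros(2)   # position in the room
        self.vel = np.zeros(2)   # linear velocity
        self.heading = 0.0       # yaw angle in radians
        self.yaw_rate = 0.0      # angular velocity

    def step(self, yaw_torque, thrust, dt=0.05):
        # Unit mass and unit moment of inertia assumed.
        self.yaw_rate += yaw_torque * dt
        self.heading += self.yaw_rate * dt
        forward = np.array([np.cos(self.heading), np.sin(self.heading)])
        self.vel += thrust * dt * forward   # thrust acts along the heading
        self.pos += self.vel * dt
```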

12 Particle Navigation
State = position, velocity, obstacle positions
Action = a sequence of n torques and forces
Solve for the action such that f(s, a) = goal position
– Minimize the sum of torques/forces (absolute values)
– We have a model: F = ma
– Previous states don’t matter: we don’t care how we got to where we are now
A tough problem
– Lots of torques and forces to compute
– Obstacle positions could move and force us to recompute
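One brute-force way to attack this whole-trajectory formulation is random shooting: sample candidate control sequences, roll each out with the model, and keep the best. A sketch built on the JetPackParticle above (the sample count, control bounds, and effort weight are arbitrary assumptions):

```python
import numpy as np

def plan_trajectory(make_particle, goal, n_steps=40, n_samples=500,
                    effort_weight=0.01, seed=0):
    """Random-shooting search over sequences of (yaw_torque, thrust).
    Score = final distance to goal + weighted sum of |controls|,
    echoing the slide's 'minimize sum of torques/forces'."""
    rng = np.random.default_rng(seed)
    best_seq, best_score = None, np.inf
    for _ in range(n_samples):
        seq = rng.uniform(-1.0, 1.0, size=(n_steps, 2))
        p = make_particle()                 # fresh particle per rollout
        for yaw_torque, thrust in seq:
            p.step(yaw_torque, thrust)
        score = (np.linalg.norm(p.pos - goal)
                 + effort_weight * np.abs(seq).sum())
        if score < best_score:
            best_seq, best_score = seq, score
    return best_seq

# e.g. plan_trajectory(JetPackParticle, np.array([5.0, 3.0]))
```

If the obstacles move, the whole sequence must be recomputed, which is the expense the slide warns about.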

13 Simplify Particle Navigation
State = position, velocity, obstacles
Action = one torque, one force
f(s, a) = new position, velocity
– Find the action such that the position is closer to the goal position
Smaller action space
Local search – could get caught in a local minimum (box canyon)
Adapts to moving obstacles
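A sketch of this simplified, greedy version: try each candidate action on a copy of the particle and keep whichever one ends up closest to the goal (the candidate set is our assumption):

```python
import copy
import numpy as np

def greedy_action(particle, goal, candidates):
    """One-step lookahead over a small set of (yaw_torque, thrust)
    pairs. Local search: cheap and adaptive, but it can stall in a
    local minimum such as a box canyon."""
    best_a, best_d = None, np.inf
    for a in candidates:
        trial = copy.deepcopy(particle)   # simulate without committing
        trial.step(*a)
        d = np.linalg.norm(trial.pos - goal)
        if d < best_d:
            best_a, best_d = a, d
    return best_a

# e.g. greedy_action(p, goal, [(-1, 1), (0, 1), (1, 1), (0, 0)])
```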

14 Multiple Particle Path Planning
Flocking behavior
– Select an action for each particle in the flock
  Avoid collisions with each other
  Avoid collisions with the environment
  Don’t stray from the flock
– Craig Reynolds: “Flocks, Herds, and Schools: A Distributed Behavioral Model,” SIGGRAPH ’87

15 Flocking Action Choices

                One Agent                         All Agents
One Action      Quick, but suboptimal             Slower, but better coordination
All Actions     Slower, and replanning required   Slowest, but complete and optimal

16 Models to the Rescue
Do you expect your neighbor to behave a certain way?
– You have a model of its actions
– You can act independently, yet coordinate

17 The Three Rules of Flocking
– Go the same speed as neighbors (velocity matching)
  Minimizes the chance of collision
– Move away from neighbors that are too close
– Move towards neighbors that are too far away
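A compact sketch of the three rules for one agent (the weights, radius, and neighbor representation are illustrative assumptions, not Reynolds' exact formulation):

```python
import numpy as np

def flocking_force(my_pos, my_vel, neighbors,
                   too_close=1.0, w_match=1.0, w_sep=1.5, w_center=1.0):
    """neighbors: list of (pos, vel) pairs within perception range."""
    if not neighbors:
        return np.zeros(2)
    positions = np.array([p for p, _ in neighbors])
    velocities = np.array([v for _, v in neighbors])

    # Rule 1: velocity matching - steer toward the neighbors' mean velocity.
    match = velocities.mean(axis=0) - my_vel

    # Rule 2: separation - move away from neighbors that are too close.
    sep = np.zeros(2)
    for p in positions:
        offset = my_pos - p
        d = np.linalg.norm(offset)
        if 0 < d < too_close:
            sep += offset / (d * d)        # stronger push when closer

    # Rule 3: flock centering - move toward the neighbors' mean position.
    center = positions.mean(axis=0) - my_pos

    return w_match * match + w_sep * sep + w_center * center
```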

18 Emergent Behaviors
The combination of the three flocking rules results in the emergence of fluid group movements.
Emergent behavior
– Behaviors that aren’t explicitly programmed into individual agents’ rules
– Ants, bees, schooling fish

19 Local Perception
The success of flocking depends on local perception (usually considered a weakness)
– Border conditions (as with cloth)
– Flock splitting

20 Ethological Motivation
Ethology: the scientific and objective study of animal behavior, especially under natural conditions
– Perception (find neighbors) and action
– Fish data

21 Combining the Three Rules
Averaging the desired actions of the three rules can be bad
– Turn left + turn right = go straight…
Instead, force is allocated to the rules according to priority:
– First, collision avoidance gets all it needs
– Then velocity matching
– Then flock centering
Starvation is possible
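A sketch of this prioritized allocation (the budget mechanism follows the slide's description; the 2D vectors and the partial-grant rule are our assumptions):

```python
import numpy as np

def allocate_by_priority(requests, budget):
    """requests: desired force vectors, highest priority first,
    e.g. [collision_avoidance, velocity_matching, flock_centering].
    budget: the maximum total force magnitude the agent can spend.
    Each rule is granted in full until the budget runs out, so the
    lowest-priority rules may be starved entirely."""
    total = np.zeros(2)
    remaining = budget
    for force in requests:
        force = np.asarray(force, dtype=float)
        mag = np.linalg.norm(force)
        if mag <= remaining:
            total += force                          # full grant
            remaining -= mag
        else:
            if remaining > 0 and mag > 0:
                total += force * (remaining / mag)  # partial grant
            break                                   # budget exhausted
    return total
```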

22 Action Selection
Potential fields – collision avoidance
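The slide is only a heading, but a standard potential-field scheme for collision avoidance looks roughly like this (entirely our sketch; the gains and influence radius are arbitrary): the goal attracts, obstacles repel within a radius of influence, and the agent follows the summed force.

```python
import numpy as np

def potential_field_force(pos, goal, obstacles, k_att=1.0,
                          k_rep=2.0, influence=2.0):
    """pos, goal: 2D numpy arrays; obstacles: list of 2D points."""
    force = k_att * (goal - pos)            # attraction toward the goal
    for obs in obstacles:
        offset = pos - np.asarray(obs, dtype=float)
        d = np.linalg.norm(offset)
        if 0 < d < influence:
            # Repulsion grows sharply as the agent nears the obstacle.
            force += k_rep * (1.0 / d - 1.0 / influence) / d**2 * (offset / d)
    return force
```

Following this force field is fast and reactive, but like any local method it shares the box-canyon problem noted earlier.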

23 Scaling Particles to Other Systems
Silas T. Dog – Bruce Blumberg, MIT AI Lab
Many more behaviors and actions
– Internal state
– Multiple goals
– Many ways to move

24 Layering Control
Perceive the world
– Is there food here?
Strategize goal(s)
– Get food
Compute a sequence of actions that will accomplish the goal(s)
– Must walk around the obstacle
Convert each action into motor control
– Run, gallop, or trot around the obstacle
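A toy, runnable sketch of these four layers (all names and stand-in rules are illustrative, not Blumberg's actual system):

```python
def layered_control(percept):
    """One decision cycle: perceive -> strategize -> plan -> motor."""
    # Layer 1: perceive the world.
    food_visible = percept.get("food_visible", False)
    obstacle_ahead = percept.get("obstacle_ahead", False)

    # Layer 2: strategize goal(s).
    goal = "get_food" if food_visible else "explore"

    # Layer 3: compute a sequence of actions that accomplishes the goal.
    actions = ["walk_around_obstacle"] if obstacle_ahead else []
    actions.append("approach_food" if goal == "get_food" else "wander")

    # Layer 4: convert each action into motor control.
    motor = {"walk_around_obstacle": "trot_arc",
             "approach_food": "gallop_straight",
             "wander": "walk_random"}
    return [motor[a] for a in actions]

print(layered_control({"food_visible": True, "obstacle_ahead": True}))
# -> ['trot_arc', 'gallop_straight']
```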

25 Multiple Goals
Must assign a priority to each goal
– Can’t eat and sleep at the same time
Can’t dither back and forth between goals
– Don’t start eating until finished sleeping
Don’t let goals wither on the priority queue
– Beware of starvation (literally)
Unrelated goals can be overlapped
– Eating while resting is possible
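One way to implement these rules (the commitment and aging mechanisms below are our assumptions): select the highest-priority goal, stay committed to it until it completes so the agent doesn't dither, and age waiting goals so none starves on the queue.

```python
def select_goal(goals, current, aging=0.1):
    """goals: dict mapping goal name -> priority (higher wins).
    current: the goal we are committed to, or None."""
    for name in goals:
        if name != current:
            goals[name] += aging    # anti-starvation: waiting goals rise
    if current in goals:
        return current              # no dithering: finish sleeping first
    return max(goals, key=goals.get) if goals else None

# e.g. select_goal({"eat": 2.0, "sleep": 5.0}, current="sleep") -> 'sleep'
```

Overlapping unrelated goals (eating while resting) would require tracking which resources each goal consumes; that bookkeeping is omitted here.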

