Dude, where’s my Warthog?


1 Dude, where’s my Warthog?
From Pathfinding to General Spatial Competence
Damián Isla, Bungie Studios
What is general spatial competence?

2 What this talk is about
Halo2: What we did. What we didn't. What worked. What you never see but we assure you is there.
How does the work we do parallel that done in other fields?
Where we go from here.

3 The Grand Question
What constitutes general spatial competence?
Lots of things. Also, it depends on who you ask.

4 Sources
Psychology
- Cognitive: spatial vision, reference frames, spatial relations, cognitive maps
- Developmental: object permanence / search
- Design: affordances
Ethology
- Place-learning
- Landmark navigation vs. dead-reckoning
Cognitive neuroscience
- Neural maps
- Hippocampal place-cells
- Reaching / grasping
AI
- Pathfinding
- Semantic networks
Robotics
- Spatial feature extraction
- Self-localization
- Map-learning
Game/Embodied AI
- Practical, robust whole-brain implementations

5 The Halo Approach
AIs are given a "playground", within which they are allowed to do whatever they want.
The designer defines the flow of battle by moving the AI from one playground to another.
The designer's time is precious: relatively little spatial information is explicitly entered by the designers.

6 Problems Solved in Halo2
Static pathfinding
- Navigation mesh (ground); waypoint network (airborne)
- Raw pathfinding
- Path-smoothing
- Hint integration (jumping, hoisting, climbing)
- Static scenery-based hints
- Static scenery carved out of the environment mesh
Static feature extraction
- Ledges and wall-bases
- Thresholds
- Corners
- Local environment classification
Object features
- Inherent properties (size, mass)
- Oriented spatial features
- Object behaviors (mount-to-uncover, destroy cover)
Dynamic pathfinding
- Perturbation of the path by dynamic obstacles
- "Meta-search" / thresholds / error stages
- Obstacle-traversal behaviors: vaulting, hoisting, leaping, mounting, smashing, destroying
Path-following
- Steering on foot (with exotic movement modes)
- Steering a vehicle (e.g. ghost, warthog, banshee)
Interaction with behavior
- What does behavior need to know about the way its requests are being implemented?
- How can pathfinding impact behavior?
Body configuration
- Flying, landing, perching
- Cornering, bunkering, peeking
Spatial analysis
- Firing-position selection
- Destination evaluation based on line-of-sight, range-to-target, etc.
"Local spatial behaviors"
- Line-tracing (e.g. for diving off cliffs)
- Not facing into walls
- Crouching in front of each other
- Not walking into the player's line of fire
- Curing isolation
- Detecting blocked shots
Reference frames
- The viral nature of the reference frame
Cognitive model / object persistence
- Honest perception
- Simple partial-awareness model
Search
- Simple by design
- Group search
Spatial conceptualization
- DESIGNER-PROVIDED: zones, areas, firing positions (locations)
This is all the shit that's always going on. Or better yet, all the shit we need to get right in order for an AI to appear spatially competent.

7 Problems Solved in Halo2
Environment representation
Object representation
Spatial relations
Spatial behaviors

8 Environment Representation
How do we represent the environment to the AI?
An important constraint: as few restrictions as possible on the form the geometry can take. The environment artist's time (and artistic freedom) is precious.
In Halo1, the spatial rep = the physics rep. This was efficient storage-wise, but limiting in terms of representational power.

9 Environment Representation
Halo2: the navigation mesh is constructed from the raw environment geometry.
- CSG "stitching in" of static scenery
- Optimization
- "Sectors": convex, polygonal, but not planar
(~10 months out of my mid-20s that I will never get back.)

10 Spatial Feature Extraction
A lot of features we're interested in can be extracted automatically…
- Surface categorization / characterization: surface connectivity, overhang detection, interior/exterior surfaces
- Ledges
- Wall-bases
- "Leanable" walls
- Corners
- "Step" sectors
- Thresholds
- Local environment classification: captures the "openness" of the environment at firing positions
(A sketch of one such extraction, ledge vs. wall-base, follows.)
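To make the automatic-extraction idea concrete, here is a minimal C++ sketch of how a nav-mesh boundary edge might be classified as a ledge or a wall-base. The talk doesn't specify Bungie's criteria; the `EdgeProbe` fields, thresholds, and function names are assumptions made for illustration.

```cpp
// Hypothetical classification of a nav-mesh boundary edge. Not Bungie's
// code: probe fields and thresholds are assumptions for illustration.

enum class EdgeFeature { None, Ledge, WallBase };

// Results of probing the space just past a boundary edge (helpers assumed).
struct EdgeProbe {
    bool  has_floor_beyond;   // is there any walkable floor past the edge?
    float drop_height;        // vertical drop just past the edge (world units)
    float wall_height;        // height of blocking geometry past the edge
};

EdgeFeature classify_boundary_edge(const EdgeProbe& probe,
                                   float min_ledge_drop  = 1.0f,
                                   float min_wall_height = 1.5f)
{
    // Ledge: the floor falls away past the edge by a meaningful drop.
    if (!probe.has_floor_beyond || probe.drop_height > min_ledge_drop)
        return EdgeFeature::Ledge;

    // Wall-base: tall blocking geometry rises just past the edge,
    // making this a candidate "leanable" wall or cover spot.
    if (probe.wall_height > min_wall_height)
        return EdgeFeature::WallBase;

    return EdgeFeature::None;
}
```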

11 Spatial Feature Extraction
…and a lot can't. So we make the designers do it.
Designer "hints":
- Jumping
- Climbing
- Hoisting
- "Wells"
Manual fix-up for when the automatic processes fail:
- Cookie-cutters
- Connectivity hints

12 Place
But that's not enough. The navigation graph is good for metric queries (e.g. would I run into a wall if I were to move 10 feet in this direction?)…
…but it is not a good representation for reasoning about space, because the sector that is geometrically convenient is not necessarily conceptually convenient.
What kind of reasoning? ANY kind of decision about WHERE TO GO (as opposed to where I am) in some way needs to deal with a meaningful conceptualization of space, which the sector probably is not.

13 Place
Psychologists talk about cognitive maps as the internal representation of behaviorally-relevant places and how they relate. A couple of interesting properties:
- Not metric
- Fuzzy
- Hierarchically organized
Useful for:
- Landmark navigation
- Dead-reckoning
- Place-learning
- Self-localization
(Image from http://www.brainconnection.com)

14 Place
In an ideal world, we would be able to automatically construct some kind of spatial semantic network. This is one area where the Halo representations are notably impoverished: we don't recognize separate rooms, mostly because the geometry rarely breaks down so neatly.
And of course other game-relevant spatial concepts could be stitched into this representation as well, such as "the way forward".

15 Place
The Halo place representation: a shallow hierarchy of spatial groupings: Zones → Areas → Positions.
Locations are inherently AREA-representations. But areas are hard to represent, so we do it in terms of collections of discrete points.
So they qualify:
- Not metric
- Fuzzy
- Hierarchical

16 Place
Zone: organizational; attached to a BSP.
Area: defines the playground; the unit of following.
Positions: the unit of tactical analysis.
(A data-structure sketch of this hierarchy follows.)
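A minimal data-structure sketch of the Zones → Areas → Positions hierarchy, assuming plain C++ containers. The field names (`flags`, `bsp_index`, etc.) are guesses for illustration, not Halo2's actual layout.

```cpp
// Hypothetical layout of the designer-authored place hierarchy.

#include <cstdint>
#include <string>
#include <vector>

struct Vec3 { float x, y, z; };

// A discrete point a character can occupy; flags mark designer intent.
struct FiringPosition {
    Vec3     point;
    uint32_t flags;   // e.g. cover, corner... (assumed encoding)
};

// An area: a behaviorally meaningful "place", represented as a fuzzy
// collection of discrete points rather than an explicit region.
struct Area {
    std::string                 name;
    std::vector<FiringPosition> positions;
};

// A zone: an organizational grouping of areas, attached to one BSP.
struct Zone {
    std::string       name;
    int               bsp_index;   // which structure BSP this zone lives in
    std::vector<Area> areas;
};
```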

17 Place
But we lose something by taking a designer-authored approach to place:
- No relational information
- A LOT of work for the designers to enter
- Very little semantic information: the designer has to provide it.
A rudimentary example of derived semantics: the "local environment classification": open, partially constrained, or constrained.

18 Place
Note, in any case, the dichotomy between our cognitive map and our navigation mesh: the navigation mesh is continuous and metric; the cognitive map is discrete and relational. They sit at opposite ends of an Accuracy ↔ "Meaning" spectrum.
Humans are exactly the opposite: we can go from metric (the physical location I'm at) to place (the place I'm at), but not the other way around. That is probably because all of our spatial calculation happens in the cognitive map, rather than in the metric space that is so readily available to us in a virtual setting.

19 Object Representation
How do we represent objects in a useful way to the AI? How do psychologists think about this?
Object-based vision
- Shape perception / categorization: contours, axes of symmetry, convex parts
Reaching / grasping
- Tight loop between vision and prehension
- Prehension != recognition
- Evidence suggests a tight linkage between visual shape perception and prehension (the shaping of the hand to grasp, the shaping reflecting size and orientation, etc.)
Assume that static objects are going to "disappear into the environment representation".

20 Dynamic Object Representation
Three ways to see an object:
- Inherent properties (size, leap-speed, destructible, custom behavior X)
- Volume
- Spatial features
(This is of course in addition to the usual render, collision and physics models.)
Remember Grandfather Minsky: multiple representations for the same thing, because different representations are useful for different kinds of problems.
Assume that static objects become part of the environment in almost all ways. (A sketch of the three-view representation follows.)
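A sketch of the three views bundled on one object. All names and fields here are assumptions made for illustration; the point is only that the same object carries several purpose-built representations side by side, per Minsky.

```cpp
// Hypothetical bundle of the "three ways to see an object" above.

#include <vector>

struct Vec3 { float x, y, z; };

// 1. Inherent properties: what the object *is*.
struct InherentProperties {
    float size;          // drives smash-vs-go-around decisions
    float leap_speed;    // how fast an AI may vault over it
    bool  destructible;  // can it be removed by shooting it?
};

// 2. Volume: a cheap spatial footprint (see the pathfinding discs below).
struct VolumeModel {
    std::vector<Vec3>  sphere_centers;   // in object space
    std::vector<float> sphere_radii;
};

// 3. Spatial features: oriented markers advertising affordances
//    (vault here, mount there) -- expanded on the slides that follow.
struct FeatureMarker {
    Vec3 point;        // where the affordance lives, in object space
    Vec3 orientation;  // direction from which the affordance is usable
    int  kind;         // e.g. vault, mount, corner (assumed encoding)
};

struct AiObjectModel {
    InherentProperties         properties;
    VolumeModel                volume;
    std::vector<FeatureMarker> features;
};
```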

21 Volume
Rough approximation using pathfinding spheres. The spheres are projected to the AI's ground plane at pathfinding time (to become pathfinding discs), producing a perturbation of the smoothed path. (A projection sketch follows.)
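Here is a small sketch of the sphere-to-disc projection and a segment-vs-disc test that would trigger a path perturbation. The math is standard; the function names, and the assumption that discs are tested against smoothed-path segments, are mine based on the slide.

```cpp
// Sketch: project obstacle spheres onto the AI's ground plane, then test
// smoothed-path segments against the resulting 2D discs. Assumed names.

#include <algorithm>
#include <cmath>

struct Vec2 { float x, y; };
struct Vec3 { float x, y, z; };

struct Disc { Vec2 center; float radius; };

// Project an obstacle sphere to a disc on the ground plane at height
// ground_z, if the sphere actually overlaps that plane.
bool project_sphere_to_disc(const Vec3& c, float r, float ground_z, Disc& out)
{
    float dz = std::fabs(c.z - ground_z);
    if (dz >= r) return false;                  // sphere misses the plane
    out.center = { c.x, c.y };
    out.radius = std::sqrt(r * r - dz * dz);    // radius of the slice
    return true;
}

// Does a 2D path segment a->b pass through the disc? If so, the path
// must be perturbed around it.
bool segment_hits_disc(const Vec2& a, const Vec2& b, const Disc& d)
{
    Vec2 ab { b.x - a.x, b.y - a.y };
    Vec2 ac { d.center.x - a.x, d.center.y - a.y };
    float len2 = ab.x * ab.x + ab.y * ab.y;
    float t = len2 > 0.f
        ? std::clamp((ac.x * ab.x + ac.y * ab.y) / len2, 0.f, 1.f)
        : 0.f;
    Vec2 closest { a.x + t * ab.x, a.y + t * ab.y };
    float dx = closest.x - d.center.x, dy = closest.y - d.center.y;
    return dx * dx + dy * dy < d.radius * d.radius;
}
```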

22 Spatial Object Features
Much like "affordances" (Gibson, Norman, Wright): an object advertises the things that can be done with / to it. But it must do so in a geometrically precise way in order to be useful.
Implementation: "object markers"
- Rails or points
- An orientation vector indicates when the affordance is active
- An object has different properties at different orientations
(A marker-activity sketch follows.)
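A sketch of an orientation-gated object marker: the affordance counts as "active" only when approached from within a cone around its advertised direction. The cone threshold and struct layout are illustrative assumptions, not Halo2's implementation.

```cpp
// Hypothetical object-marker affordance test.

#include <cmath>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) {
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 normalized(const Vec3& v) {
    float len = std::sqrt(dot(v, v));
    return len > 0.f ? Vec3{ v.x / len, v.y / len, v.z / len } : v;
}

struct ObjectMarker {
    Vec3 point;        // world-space location of the affordance
    Vec3 orientation;  // world-space direction the affordance "faces"
};

// An AI at ai_position may use the marker only if it approaches from
// within ~60 degrees of the marker's advertised orientation.
bool marker_active_for(const ObjectMarker& m, const Vec3& ai_position,
                       float cos_threshold = 0.5f)
{
    Vec3 to_ai = normalized({ ai_position.x - m.point.x,
                              ai_position.y - m.point.y,
                              ai_position.z - m.point.z });
    return dot(to_ai, normalized(m.orientation)) > cos_threshold;
}
```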

23 Object Representation
Volume + features = how the AI understands shape. Adding rich AI information becomes a fundamental part of the modeling of the object (just like authoring collision and physics models).
Used for:
Explicit behavior
- Cornering (corner feature)
- Mount-to-uncover (mount feature)
- Destroy cover (destructible property)
Pathfinding obstacle-traversal
- Vault (vault feature)
- Mount (mount feature)
- Smash (size property)
- Destroy obstacle (destructible property)
And maybe this kind of decomposition of spatial/object features is something that psychology could think about?

24 Spatial Relations
How do the objects in the AI's knowledge model relate to each other spatially? Well, first of all, what's IN the knowledge model? In Halo2:
- Potential targets (enemies)
- Player(s)
- Vehicles
- Dead bodies
And that's it.
This is one of the biggest holes in our knowledge model right now: we have a list of objects, and lots of information about how those objects relate to us, but very little information about how those objects relate to EACH OTHER (which, as the psychologists will tell you, is a big part of understanding our world).

25 Spatial Relations
What the KR people think…
From Papadias et al., "Acquiring, Representing and Processing Spatial Relations", Proceedings of the 6th International Symposium on Spatial Data Handling, Edinburgh, 1994.

26 Spatial Relations
Some rudimentary Halo2 examples:
Grenade-throwing
- Find clusters of nearby enemies
Blocked shots
- Recognize: "I can see my target, and I wanted my bullets to go X meters, but they only went 0.6X meters. I must be blocked."
Destroy-cover
- Recognize that my target is behind destructible cover
Mount-to-uncover
- Recognize that my target is behind a mountable object
"Behind": my line-of-sight test was blocked by a given object, within X distance of my target, and closer to my target than to me. (A sketch of this test follows.)
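The slide's definition of "behind" translates almost directly into code. A sketch, with the line-of-sight trace plumbing assumed rather than taken from Halo2's actual API:

```cpp
// "Behind" per the slide: my LOS trace hit the given object, the hit is
// within max_cover_distance of the target, and the hit is closer to the
// target than to me. TraceResult and names are assumptions.

#include <cmath>

struct Vec3 { float x, y, z; };

static float distance(const Vec3& a, const Vec3& b) {
    float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

struct TraceResult {
    bool blocked;        // did the line-of-sight ray hit something?
    int  object_index;   // what it hit
    Vec3 hit_point;      // where it hit
};

bool target_is_behind_object(const TraceResult& los,  // trace: me -> target
                             int object_index,
                             const Vec3& my_position,
                             const Vec3& target_position,
                             float max_cover_distance) // "X" in the slide
{
    if (!los.blocked || los.object_index != object_index)
        return false;
    float hit_to_target = distance(los.hit_point, target_position);
    float hit_to_me     = distance(los.hit_point, my_position);
    return hit_to_target < max_cover_distance && hit_to_target < hit_to_me;
}
```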

27 Behind the Space Crate
The notion of "behind" could happen at multiple levels.
(Halo2: designing the notion of "behind".)

28 Behind the Space Crate
The notion of "behind" could happen at multiple levels.
(Designing the notion of "behind".)
29 Behind the Space Crate
All of which is just to say: "behind" is not an entirely trivial concept. The collection of spatial-relation information, and the management of the structures that represent it, are not trivial either!

30 Spatial Groupings
E.g.: clusters of enemies, battle fronts, battle vectors.
In Halo2, we perform dynamic clumping of nearby allies, for:
- Joint behavior
- Call-and-response combat dialogue
- Shared perception
BUT, the clump is not a perceptual construct! It is sort of a dynamic K-means approach: this is the pattern-recognition approach. (A clumping sketch follows.)
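A sketch of one plausible "dynamic K-means" clumping pass, per the slide's description: allies join the nearest clump within a radius, new clumps spawn when nobody is close, and centers track the running mean of their members. Details are assumptions, not Halo2's code.

```cpp
// Hypothetical dynamic clumping of nearby allies.

#include <cmath>
#include <vector>

struct Vec2 { float x, y; };

struct Clump {
    Vec2             center;
    std::vector<int> members;   // ally indices
};

static float dist2(const Vec2& a, const Vec2& b) {
    float dx = a.x - b.x, dy = a.y - b.y;
    return dx * dx + dy * dy;
}

std::vector<Clump> clump_allies(const std::vector<Vec2>& ally_positions,
                                float clump_radius)
{
    std::vector<Clump> clumps;
    float r2 = clump_radius * clump_radius;

    for (int i = 0; i < (int)ally_positions.size(); ++i) {
        const Vec2& p = ally_positions[i];

        // Find the nearest existing clump within range.
        Clump* best = nullptr;
        float  best_d2 = r2;
        for (Clump& c : clumps) {
            float d2 = dist2(p, c.center);
            if (d2 < best_d2) { best_d2 = d2; best = &c; }
        }

        if (!best) {                       // nobody close: start a new clump
            clumps.push_back({ p, { i } });
            continue;
        }

        // Join, nudging the center toward the running mean of members.
        best->members.push_back(i);
        float n = (float)best->members.size();
        best->center.x += (p.x - best->center.x) / n;
        best->center.y += (p.y - best->center.y) / n;
    }
    return clumps;
}
```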

31 Spatial Groupings
Cognitive efficiency: one, two, many (cf. the Pirahã of the Maici River, lowland Amazonia, Brazil).
Should we give groupings first-class representation in the AI's knowledge model? Another hierarchy: see the many as one, or instantiate individuals as necessary.
Representing "a group" should SAVE ME from having to represent individuals (rather than be an aggregation of my individual representations).
Again, a very rudimentary example: AIs perceive swarms only as a single entity. But that's mostly because that's how the AI itself deals with them.

32 Reference Frames
What the psychologist would say:
- Egocentric: viewer-relative
- Allocentric: world-relative
- Intrinsic: object-relative

33 Reference Frames
The most interesting use: frames of motion. E.g. AIs running around on the back of the giant scarab tank.

34 Reference Frames
The hard part: moving sectors. Adapting A*:
- A* runs in local space, except across ref-frame boundaries
- The final path is cached in local space(s)
- A new point representation: (x, y, z, f)
Of course you have to overload the usual point/vector operations in order to intelligently handle these "ai-points", but once you do that, the world is your oyster. (A sketch follows.)
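A sketch of the (x, y, z, f) point and one overloaded operation. The frame table, matrix layout, and function names are assumptions; the essential idea from the slide is that cross-frame operations resolve through the frame transform first.

```cpp
// Sketch of an "ai-point": a local-space position plus a reference-frame
// index. Frame table and names are assumptions for illustration.

#include <cassert>
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

// A rigid transform for a (possibly moving) frame, e.g. the scarab's back.
struct Frame {
    Vec3 rows[3];   // rotation, one row per output axis
    Vec3 origin;    // translation

    Vec3 to_world(const Vec3& l) const {
        return { rows[0].x * l.x + rows[0].y * l.y + rows[0].z * l.z + origin.x,
                 rows[1].x * l.x + rows[1].y * l.y + rows[1].z * l.z + origin.y,
                 rows[2].x * l.x + rows[2].y * l.y + rows[2].z * l.z + origin.z };
    }
};

// The new point representation: (x, y, z, f).
struct AiPoint {
    Vec3 position;   // in the frame's local space
    int  frame;      // index into the frame table; -1 means world space
};

Vec3 resolve(const AiPoint& p, const std::vector<Frame>& frames) {
    if (p.frame < 0) return p.position;            // already world-relative
    assert(p.frame < (int)frames.size());
    return frames[p.frame].to_world(p.position);
}

// One example of an "overloaded operation": distance between points that
// may live in different moving frames.
float ai_distance(const AiPoint& a, const AiPoint& b,
                  const std::vector<Frame>& frames) {
    Vec3 wa = resolve(a, frames), wb = resolve(b, frames);
    float dx = wa.x - wb.x, dy = wa.y - wb.y, dz = wa.z - wb.z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}
```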

35 Reference Frames
Once we start using it in one place, we have to use it everywhere!
- Sectors
- Firing positions
- Scripting points
- Target locations
- Last-seen locations
- Burst targets
- Etc.
This results in a generalized "understanding" of reference frames. And like all good things in AI, a lot of cool stuff falls out more or less for free.

36 Spatial Behavior
Two types:
Allocentric
- Generally uses the cognitive map
- Typically recognized behaviors, e.g. fight, follow, search
Egocentric
- Generally works through local spatial queries
- Things that should just sort of, you know, happen

37 Fighting
Position evaluation based on:
- Range to target
- Line of sight to target
- Distance from current position
- Distance to the player and other allies
Easy! This is the tactical spatial analysis problem, and there are lots of published solutions out there. See in particular Van Der Sterren, "Killzone's AI: Dynamic Procedural Combat Tactics", GDC 2005. (A scoring sketch follows.)
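A sketch of a weighted-sum position scorer over the criteria listed above. The weights, the preferred-range term, and the helper inputs are illustrative assumptions, not Halo2's actual evaluation.

```cpp
// Hypothetical firing-position scoring over the slide's criteria.

struct PositionScoreInputs {
    float range_to_target;      // world units
    bool  line_of_sight;        // can I hit the target from here?
    float distance_from_me;     // travel cost to reach the position
    float distance_to_allies;   // crowding penalty input
};

float score_firing_position(const PositionScoreInputs& in,
                            float preferred_range)   // weapon's sweet spot
{
    float score = 0.f;

    // Prefer positions near the weapon's effective range.
    float range_error = in.range_to_target - preferred_range;
    score -= 0.5f * range_error * range_error;

    // Nearly a hard requirement: no line of sight, almost no value.
    score += in.line_of_sight ? 100.f : -1000.f;

    // Don't run across the map for a marginally better spot...
    score -= 2.f * in.distance_from_me;

    // ...and don't pile onto a position an ally already claimed.
    if (in.distance_to_allies < 3.f)
        score -= 50.f;

    return score;   // pick the candidate with the highest score
}
```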

38 Following
Easy to do mediocrely; hard and complicated to do well.
- Stay close, but not too close. (The "mediocre" version is sketched below.)
- Try to stay in front (so that the player can see and appreciate you), but don't get in the way and don't block the player's line of fire. What does "in front" even mean?
- Don't follow when it's not appropriate.
In the ideal case you need player-telepathy: look for an explanation for the player's movement, then determine whether that explanation warrants MY adjusting my position as well. Is the player traveling somewhere, or is he/she just leaving the battle front for a few seconds to look for ammo? Is the room he/she is entering a path to somewhere else, or a dead end?
In Halo2, you needed to get quite far away from your collected allies before they registered that you were moving, so they always seemed lazy or slow. They really could have used a sense of "we're finished in this area, and the only explanation for the player's movement in that direction is that he's moving on to the next area."
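A sketch of the "mediocre" baseline only: distance bands plus a crude "in front" test. The thresholds are assumptions, and none of this addresses the player-telepathy problem the slide calls out.

```cpp
// Hypothetical baseline follow rules: distance bands + forward bias.

#include <cmath>

struct Vec2 { float x, y; };

static float distance(const Vec2& a, const Vec2& b) {
    float dx = a.x - b.x, dy = a.y - b.y;
    return std::sqrt(dx * dx + dy * dy);
}

enum class FollowAction { Stay, MoveToPlayer, BackOff };

FollowAction follow_decision(const Vec2& my_pos,
                             const Vec2& player_pos,
                             float too_close = 2.f,    // "not too close"
                             float too_far   = 12.f)   // "stay close"
{
    float d = distance(my_pos, player_pos);
    if (d < too_close) return FollowAction::BackOff;
    if (d > too_far)   return FollowAction::MoveToPlayer;
    return FollowAction::Stay;   // comfortable band: don't fidget
}

// One crude reading of "in front": bias candidate follow points toward
// the player's facing direction, so the player can see the follower.
bool in_front_of_player(const Vec2& candidate, const Vec2& player_pos,
                        const Vec2& player_facing)
{
    Vec2 to_candidate { candidate.x - player_pos.x,
                        candidate.y - player_pos.y };
    return to_candidate.x * player_facing.x
         + to_candidate.y * player_facing.y > 0.f;
}
```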

39 Search
The most interesting of the spatial behaviors. As complicated as you want to get:
- Fake it completely: play a "look around and shrug" animation; pretend you don't know where the player is while exclaiming "Where'd he go?!"
- Simple scripted search routines
- Basic stateless hidden-location uncovering (Halo2)
- Probabilistic location models based on spatial structure: particle filters, occupancy maps (my thesis)
- …based on spatial structure and spatial semantics…
- …based on spatial structure and semantics and a player model
For both particle filters and occupancy maps, negative perception is extremely important (the places that the target is observed NOT to be at). Both can be used for all kinds of neat things, not just search.
The more complicated the search model, the more complicated the perception and knowledge models and the maps needed to support it. (An occupancy-map sketch follows.)
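A sketch of an occupancy map over discrete areas, showing both negative perception and diffusion across the spatial structure. This is a simplification of the idea, not Halo2's implementation or the thesis model.

```cpp
// Hypothetical occupancy map: a probability per area of containing the
// target, updated by perception and diffused through area adjacency.

#include <cstddef>
#include <vector>

struct OccupancyMap {
    std::vector<float> p;   // p[i]: probability the target is in area i

    // Negative perception: I observed area i and the target wasn't
    // there. Zero it out and renormalize over the rest.
    void observe_absent(int i) {
        float removed = p[i];
        p[i] = 0.f;
        float remaining = 1.f - removed;
        if (remaining <= 0.f) return;   // nowhere left: model is stale
        for (float& v : p) v /= remaining;
    }

    // Positive perception: target actually seen in area i.
    void observe_present(int i) {
        for (float& v : p) v = 0.f;
        p[i] = 1.f;
    }

    // Diffuse probability to adjacent areas over time, modeling the
    // target's possible movement through the spatial structure.
    void diffuse(const std::vector<std::vector<int>>& adjacency, float rate) {
        std::vector<float> next(p.size(), 0.f);
        for (std::size_t i = 0; i < p.size(); ++i) {
            next[i] += p[i] * (1.f - rate);
            const auto& nbrs = adjacency[i];
            if (nbrs.empty()) { next[i] += p[i] * rate; continue; }
            float share = p[i] * rate / nbrs.size();
            for (int n : nbrs) next[n] += share;
        }
        p.swap(next);   // probability mass is conserved
    }
};
```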

40 Egocentric Behaviors
The grab-bag:
- When stopped, don't face into walls (sketched below)
- Don't pick a spot that blocks a friend's line of fire
- Don't block the player's line of fire, ever
- Don't even cross the player's line of fire
- Crouch down when someone behind me is shooting
- Move with my allies, rather than treating them as obstacles
- Get off non-pathfindable surfaces
These are hard because they're not exclusive behaviors; they're things to "keep in mind", which means that high-level behaviors always need to be robust to their effects. I suspect we need more mid-level spatial concepts for high- and low-level behavior to interact through.
Butcher: basic training behavior??? The stuff that has to become second nature.
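A sketch of just the first grab-bag rule, "when stopped, don't face into walls": probe the desired facing and, if it's blocked, swing toward the clearest direction in a small fan of candidates. The probe callback stands in for whatever raycast the engine actually provides; everything here is an assumption.

```cpp
// Hypothetical "don't face into walls" idle-facing picker.

#include <cmath>
#include <functional>

struct Vec2 { float x, y; };

// Assumed query: distance to the nearest wall along a direction, capped
// at max_range (returns max_range if nothing is hit).
using WallProbe = std::function<float(const Vec2& origin, const Vec2& dir,
                                      float max_range)>;

Vec2 pick_idle_facing(const Vec2& position, const Vec2& desired_facing,
                      const WallProbe& probe,
                      float comfortable_range = 3.f)
{
    // Desired facing is fine if there's open space in front of us.
    if (probe(position, desired_facing, comfortable_range) >= comfortable_range)
        return desired_facing;

    // Otherwise sample a fan of candidate facings and keep the clearest.
    float base = std::atan2(desired_facing.y, desired_facing.x);
    Vec2  best = desired_facing;
    float best_clear = 0.f;
    for (int i = -3; i <= 3; ++i) {
        float a = base + i * 0.5f;                 // ~30 degree steps
        Vec2  dir { std::cos(a), std::sin(a) };
        float clear = probe(position, dir, comfortable_range);
        if (clear > best_clear) { best_clear = clear; best = dir; }
    }
    return best;
}
```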

41 A Pattern Emerges…
Here is a typical course of events:
1. The designer says, "we want X to happen."
2. X is implemented as a behavior.
3. In the course of implementation, a useful representation is invented.
4. The representation, it is realized, is eminently generalizable.
5. The representation, and its management, is pulled out of the behavior and made a general competence, available to all behaviors.
The important point: these representations are driven by need. Examples: clumps, blocking objects.

42 Unsolved Mysteries
Group movement
- Queuing
- Formations
"Configuration analysis"
- My relation to my allies
- Anticipation
Spatial semantics
- Rooms and doorways
- Inside / outside
- Understanding more environmental spatial features

43 The Grand Question(s), Redux
Not so much "what constitutes spatial competence?" but "does our notion of spatial competence have anything to do with that of cognitive science?" Is there a convergence between their explanatory models and our control models?
Our models have no respect for, or interest in, what is happening in the natural mind…
…but there is something satisfyingly Darwinian about the way that some useful mental representations survive while others are culled…
…and it would be thrilling if it turned out we were working on the same problem from opposite directions.

44 Questions?
Damián Isla
damiani@microsoft.com
www.ai-blog.net
BUNGIE IS HIRING

