The Halo Approach

- AIs are given a “playground”, within which they are allowed to do whatever they want.
- The designer defines the flow of battle by moving the AI from one playground to another.
- The designer’s time is precious.
- Relatively little spatial information is explicitly entered by the designers.
Problems Solved in Halo 2

- Static pathfinding
  - Navigation mesh (ground); waypoint network (airborne)
  - Raw pathfinding; path-smoothing
  - Hint integration (jumping, hoisting, climbing)
  - Static scenery-based hints; static scenery carved out of the environment mesh
  - Static feature extraction: ledges and wall-bases, thresholds, corners
  - Local environment classification
- Object features
  - Inherent properties (size, mass)
  - Oriented spatial features
  - Object behaviors (mount-to-uncover, destroy cover)
- Dynamic pathfinding
  - Perturbation of the path by dynamic obstacles
  - “Meta-search” / thresholds / error stages
  - Obstacle-traversal behaviors: vaulting, hoisting, leaping, mounting, smashing, destroying
- Path-following
  - Steering on foot (with exotic movement modes)
  - Steering a vehicle (e.g. ghost, warthog, banshee)
- Interaction with behavior
  - What does behavior need to know about the way its requests are being implemented?
  - How can pathfinding impact behavior?
- Body configuration
  - Flying, landing, perching
  - Cornering, bunkering, peeking
- Spatial analysis
  - Firing-position selection
  - Destination evaluation based on line-of-sight, range-to-target, etc.
- “Local spatial behaviors”
  - Line-tracing (e.g. for diving off cliffs)
  - Not facing into walls
  - Crouching in front of each other
  - Not walking into the player’s line of fire
  - Curing isolation
  - Detecting blocked shots
- Reference frames
  - The viral nature of the reference frame
- Cognitive model / object persistence
  - Honest perception
  - Simple partial-awareness model
- Search
  - Simple by design
  - Group search
- Spatial conceptualization (designer-provided)
  - Zones, areas, firing positions (locations)

This is all the shit that’s always going on. Or better yet, all the shit we need to get right in order for an AI to appear spatially competent.
Problems Solved in Halo 2

- Environment representation
- Object representation
- Spatial relations
- Spatial behaviors
Environment Representation

- How do we represent the environment to the AI?
- An important constraint: as few restrictions as possible on the form the geometry can take.
  - The environment artist’s time (and artistic freedom) is precious.

In Halo 1, the spatial rep = the physics rep. This was efficient storage-wise, but limiting in terms of representational power.
Environment Representation

- Halo 2: navigation mesh constructed from the raw environment geometry
  - CSG “stitching in” of static scenery
  - Optimization
- “Sectors”: convex, polygonal, but not planar
- ~10 months out of my mid-20s that I will never get back.
Spatial Feature Extraction

A lot of features we’re interested in can be extracted automatically…

- Surface categorization / characterization
  - Surface connectivity
  - Overhang detection
  - Interior/exterior surfaces
- Ledges
- Wall-bases
- “Leanable” walls
- Corners
- “Step” sectors
- Thresholds
- Local environment classification
  - Captures the “openness” of the environment at firing positions
Spatial Feature Extraction

… and a lot can’t. So we make the designers do it.

- Designer “hints”:
  - Jumping
  - Climbing
  - Hoisting
  - “Wells”
- Manual fix-up for when the automatic processes fail:
  - Cookie-cutters
  - Connectivity hints
Place

But that’s not enough. The navigation graph is good for metric queries (e.g. would I run into a wall if I were to move 10 feet in this direction?)…

… but it is not a good representation for reasoning about space, because the sector that is geometrically convenient is not necessarily conceptually convenient.

What kind of reasoning? ANY kind of decision about WHERE TO GO (instead of where I am) in some way needs to deal with a meaningful conceptualization of space, which the sector probably is not.
Place

Psychologists talk about cognitive maps as the internal representation of behaviorally-relevant places and how they relate. A couple of interesting properties:

- Not metric
- Fuzzy
- Hierarchically organized

Useful for:

- Landmark navigation
- Dead-reckoning
- Place-learning
- Self-localization

(Figure from http://www.brainconnection.com)
Place

In the ideal world, we would be able to automatically construct some kind of spatial semantic network.

This is one area where the Halo representations are notably impoverished. We don’t recognize separate rooms, mostly because the geometry rarely breaks down so neatly.

And of course there could be other game-relevant spatial concepts stitched into this representation as well, such as “the way forward”.
Place

The Halo place representation: a shallow hierarchy of spatial groupings: zones > areas > positions.

- Locations are inherently AREA-representations. But areas are hard to represent, so we do it in terms of collections of discrete points.
- So they qualify:
  - Not metric
  - Fuzzy
  - Hierarchical
Place

- Zone: organizational; attached to a BSP
- Area: defines the playground; following
- Positions: tactical analysis
Place

But we lose something from taking a designer-authored approach to place:

- No relational information
- A LOT of work for the designers to enter
- Very little semantic information
  - A rudimentary example: derived “local environment classification”: open, partially constrained or constrained
  - The designer has to do it.
Place

Note, in any case, the dichotomy between our cognitive map and our navigation mesh:

- Navigation mesh: continuous, metric (accuracy)
- Cognitive map: discrete, relational (“meaning”)

Whereas humans are exactly the opposite: we can go from metric (the physical location I’m at) to location (the place I’m at), but not the other way around. Probably because all of our spatial calculation happens in that cognitive map, rather than in the metric space which is so readily available to us in a virtual setting.
Object Representation

How do we represent objects in a useful way to the AI? How do psychologists think about this?

- Object-based vision
- Shape perception / categorization
  - Contours
  - Axes of symmetry
  - Convex parts
- Reaching / grasping
  - Tight loop between vision and prehension
  - Prehension != recognition
  - Evidence suggests a tight linkage between visual shape perception and prehension (the shaping of the hand to grasp, the shaping reflecting size and orientation, etc.)

Assume that static objects are going to “disappear into the environment representation”.
Dynamic Object Representation

Three ways to see an object:

- Inherent properties (size, leap-speed, destructible, custom behavior X)
- Volume
- Spatial features

(This is of course in addition to the usual render, collision and physics models.)

Remember Grandfather Minsky: multiple representations for the same thing, because different representations are useful for different kinds of problems.

Assume that static objects become part of the environment in almost all ways.
Volume

- Rough approximation using pathfinding spheres
- Spheres projected to the AI’s ground-plane at pathfinding time (to become pathfinding discs)
- A perturbation of the smoothed path
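The sphere-to-disc projection and the waypoint perturbation it drives can be sketched as follows. This is a minimal 2D illustration under my own assumptions; the function names (`project_to_disc`, `perturb_waypoint`) and the radial push-out strategy are invented for the example, not Halo 2’s actual implementation:

```python
import math

def project_to_disc(center, radius, ground_z):
    """Project a pathfinding sphere onto the AI's ground plane.
    Returns (cx, cy, disc_radius), or None if the sphere misses the plane."""
    dz = abs(center[2] - ground_z)
    if dz >= radius:
        return None  # sphere doesn't reach the ground plane
    return (center[0], center[1], math.sqrt(radius * radius - dz * dz))

def perturb_waypoint(p, disc, margin=0.1):
    """Push a 2D waypoint radially out of an obstacle disc (plus a margin)."""
    cx, cy, r = disc
    dx, dy = p[0] - cx, p[1] - cy
    d = math.hypot(dx, dy)
    if d >= r:
        return p  # already outside the disc
    if d == 0.0:
        dx, dy, d = 1.0, 0.0, 1.0  # degenerate case: pick an arbitrary direction
    scale = (r + margin) / d
    return (cx + dx * scale, cy + dy * scale)
```

A real path perturbation would run every smoothed-path waypoint (and segment) through checks like these against each nearby obstacle disc.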
Spatial Object Features

- Much like “affordances” (Gibson, Norman, Wright)
- An object advertises the things that can be done with / to it
- But it must do so in a geometrically precise way in order to be useful
- Implementation: “object markers”
  - Rails or points
  - An orientation vector indicates when the affordance is active
  - An object has different properties at different orientations
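The orientation-vector test might look something like this sketch. The name `affordance_active`, the 2D setup, and the cone tolerance are all my own assumptions for illustration; it assumes `marker_dir` is unit-length:

```python
import math

def affordance_active(marker_pos, marker_dir, ai_pos, half_angle_deg=60.0):
    """A marker's affordance counts as 'active' only when its orientation
    vector points toward the querying AI, within a tolerance cone.
    marker_dir is assumed to be a unit vector."""
    to_ai = (ai_pos[0] - marker_pos[0], ai_pos[1] - marker_pos[1])
    dist = math.hypot(to_ai[0], to_ai[1])
    if dist == 0.0:
        return True  # standing on the marker: trivially active
    # Cosine of the angle between the marker's facing and the direction to the AI
    cos_a = (marker_dir[0] * to_ai[0] + marker_dir[1] * to_ai[1]) / dist
    return cos_a >= math.cos(math.radians(half_angle_deg))
```

So a crate’s “mount” marker, for example, would advertise itself only to AIs approaching from the mountable side.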
Object Representation

Volume + features = how the AI understands shape.

Adding rich AI information becomes a fundamental part of the modeling of the object (just like authoring collision and physics models).

Used for:

- Explicit behavior
  - Cornering (corner feature)
  - Mount-to-uncover (mount feature)
  - Destroy cover (destructible property)
- Pathfinding obstacle-traversal
  - Vault (vault feature)
  - Mount (mount feature)
  - Smash (size property)
  - Destroy obstacle (destructible property)

And maybe this kind of decomposition of spatial/object features is something that psychology could think about?
Spatial Relations

How do the objects in the AI’s knowledge model relate to each other spatially? Well, first of all, what’s IN the knowledge model? In Halo 2:

- Potential targets (enemies)
- Player(s)
- Vehicles
- Dead bodies

And that’s it.

This is one of the biggest holes in our knowledge model right now. We have a list of objects, and lots of information about how those objects relate to us, but very little information about how those objects relate to EACH OTHER (which, as the psychologists will tell you, is a big part of understanding our world).
Spatial Relations

What the KR people think…

From Papadias et al., “Acquiring, Representing and Processing Spatial Relations”, Proceedings of the 6th International Symposium on Spatial Data Handling, Edinburgh, 1994.
Spatial Relations

Some rudimentary Halo 2 examples:

- Grenade-throwing: find clusters of nearby enemies
- Blocked shots: recognize “I can see my target, and I wanted my bullets to go X meters, but they only went 0.6X meters. I must be blocked.”
- Destroy-cover: recognize that my target is behind destructible cover
- Mount-to-uncover: recognize that my target is behind a mountable object

“Behind”: my line-of-sight test was blocked by a given object, within X distance of my target, and closer to my target than to me.
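The working definition of “behind” above reduces to a small predicate. A minimal sketch, assuming the line-of-sight trace has already been done and hit this particular blocker (the function name and signature are my own):

```python
import math

def is_behind(blocker_pos, my_pos, target_pos, max_dist):
    """The slide's 'behind' test, assuming a line-of-sight trace from me to
    the target was already blocked by the object at blocker_pos:
    the blocker is within max_dist of the target, and it is closer to the
    target than to me."""
    to_target = math.dist(blocker_pos, target_pos)
    to_me = math.dist(blocker_pos, my_pos)
    return to_target <= max_dist and to_target < to_me
```

Destroy-cover and mount-to-uncover would then combine this predicate with the blocker’s properties (destructible) or features (mountable).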
Behind the Space Crate

The notion of “behind” could happen at multiple levels. (Halo 2: designing the notion of “behind”.)
Behind the Space Crate

All of which is just to say: “behind” is not an entirely trivial concept. The collection of spatial-relation information and the management of its representation structures are not trivial either!
Spatial Groupings

E.g.: clusters of enemies, battle fronts, battle vectors.

In Halo 2: perform dynamic clumping of nearby allies, for:

- Joint behavior
- Call-response combat dialogue
- Shared perception

BUT: not a perceptual construct! Sort of a dynamic k-means approach. This is the pattern-recognition approach.
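One way the “sort of a dynamic k-means” clumping could work is a greedy single-pass grouping, re-run each tick as units move. This is my own toy sketch (the function name, the dict layout, and the single-pass strategy are assumptions, not the shipped algorithm):

```python
import math

def clump_allies(positions, radius):
    """Greedy clumping: each ally joins the nearest existing clump whose
    centroid lies within `radius`, otherwise it starts a new clump.
    Re-running this every tick makes the clumps 'dynamic'."""
    clumps = []  # each clump: {'members': [...], 'centroid': (x, y)}
    for p in positions:
        best, best_d = None, radius
        for c in clumps:
            d = math.dist(p, c['centroid'])
            if d <= best_d:
                best, best_d = c, d
        if best is None:
            clumps.append({'members': [p], 'centroid': p})
        else:
            best['members'].append(p)
            # Incrementally update the running centroid
            n = len(best['members'])
            cx, cy = best['centroid']
            best['centroid'] = (cx + (p[0] - cx) / n, cy + (p[1] - cy) / n)
    return clumps
```

Each clump can then own shared state: a call-response dialogue slot, a shared last-seen target position, a joint behavior.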
Spatial Groupings

Cognitive efficiency: one, two, many.

- Give groupings first-class representation in the AI’s knowledge model?
- Another hierarchy
- See the many as one
- Or, instantiate individuals as necessary

(Pirahã: Maici River, Lowland Amazonia region, Brazil)

Representing “a group” should SAVE ME from having to represent individuals (rather than be an aggregation of my individual representations). Again a very rudimentary example: AIs only perceive swarms as a single entity. But that’s mostly because that’s how the AI itself deals with them.
Reference Frames

What the psychologist would say:

- Egocentric: viewer-relative
- Allocentric: world-relative
- Intrinsic: object-relative
Reference Frames

The most interesting use: frames of motion. E.g. AIs running around on the back of the giant scarab tank.
Reference Frames

The hard part:

- Moving sectors
- Adapting A*
  - A* in local space, except across ref-frame boundaries
  - Final path cached in local space(s)
- A new point representation: (x, y, z, f)

Of course you have to overload the usual point/vector operations in order to intelligently handle these “AI-points”, but once you do that, the world is your oyster.
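The (x, y, z, f) point and its overloaded operations can be sketched like this, in 2D for brevity. The class names (`Frame`, `AiPoint`) and the resolve-to-world-on-subtract policy are my own illustrative choices, not Halo 2’s actual types:

```python
import math

class Frame:
    """A (possibly moving) reference frame: an origin plus a yaw in world space."""
    def __init__(self, origin=(0.0, 0.0), yaw=0.0):
        self.origin, self.yaw = origin, yaw

    def to_world(self, p):
        # Rotate the local point by yaw, then translate by the frame origin
        c, s = math.cos(self.yaw), math.sin(self.yaw)
        return (self.origin[0] + c * p[0] - s * p[1],
                self.origin[1] + s * p[0] + c * p[1])

WORLD = Frame()  # the identity frame

class AiPoint:
    """A point tagged with the frame it lives in: (x, y, f)."""
    def __init__(self, x, y, frame=WORLD):
        self.x, self.y, self.frame = x, y, frame

    def world(self):
        return self.frame.to_world((self.x, self.y))

    def __sub__(self, other):
        # Overloaded difference: resolve both points into world space first,
        # so vectors between points in different frames "just work".
        a, b = self.world(), other.world()
        return (a[0] - b[0], a[1] - b[1])
```

With a scheme like this, a path cached in the scarab’s local frame stays valid as the scarab moves: only the frame’s origin and yaw change, not the cached local coordinates.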
Reference Frames

Once we start using it in one place, we have to use it everywhere!

- Sectors
- Firing positions
- Scripting points
- Target locations
- Last-seen locations
- Burst targets
- Etc.

Results in a generalized “understanding” of reference frames. And like all good things AI, a lot of cool stuff falls out more or less for free.
Spatial Behavior

Two types:

- Allocentric
  - Generally uses the cognitive map
  - Typically recognized behaviors, e.g. fight, follow, search
- Egocentric
  - Generally through local spatial queries
  - Things that should just sort of, you know, happen
Fighting

Position evaluation based on:

- Range-to-target
- Line-of-sight to target
- Distance from current position
- Distance to the player and other allies

Easy! This is the tactical spatial analysis problem, and there are lots of published solutions out there. See in particular van der Sterren, “Killzone’s AI: Dynamic Procedural Combat Tactics”, GDC 2005.
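A toy version of this kind of weighted position evaluation might look like the following. The weights, names (`score_position`, `best_position`), and linear scoring form are illustrative assumptions, not Halo 2’s actual evaluator:

```python
import math

def score_position(pos, me, target, has_los, ideal_range,
                   w_range=1.0, w_los=5.0, w_move=0.2):
    """Score a candidate firing position (higher is better), combining the
    criteria from the slide: range-to-target, line-of-sight, travel cost."""
    range_err = abs(math.dist(pos, target) - ideal_range)  # prefer ideal weapon range
    travel = math.dist(pos, me)                            # prefer nearby positions
    return (w_los if has_los else 0.0) - w_range * range_err - w_move * travel

def best_position(candidates, me, target, los_test, ideal_range):
    """Pick the best-scoring candidate; los_test(pos) -> bool is assumed to be
    a (potentially expensive) line-of-sight query."""
    return max(candidates,
               key=lambda p: score_position(p, me, target, los_test(p), ideal_range))
```

Distance-to-allies terms would slot in the same way as extra weighted penalties; the interesting engineering problem is mostly keeping the line-of-sight tests cheap.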
Following

Easy to do mediocrely; hard and complicated to do well.

- Stay close
- Not too close
- Try and stay in front (so that the player can see and appreciate you), but don’t get in the way and don’t block the player’s line of fire
  - What does “in front” even mean?
- Don’t follow when not appropriate

In the ideal case, we need player-telepathy: look for an explanation for the player’s movement, then determine whether that explanation warrants MY adjusting my position as well. Is the player traveling somewhere, or is he/she just leaving the battle front for a few seconds to look for ammo? Is the room he/she is entering a path to somewhere else or a dead-end?

In Halo 2, you needed to get quite far away from your collected allies before they registered that you were moving, so they always seemed lazy or slow. Whereas they really could have used a sense of “we’re finished in this area, and the only explanation for the player’s movement in that direction is that he’s moving on to the next area”.
Search

The most interesting of the spatial behaviors. As complicated as you want to get:

- Fake it completely
  - Play a “look around and shrug” animation
  - Pretend you don’t know where the player is while exclaiming “Where’d he go?!”
- Simple scripted search routines
- Basic stateless hidden-location uncovering, based on spatial structure (Halo 2)
- Probabilistic location models, based on spatial structure and semantics and a player model (my thesis)
  - Particle filters, occupancy maps
  - For both, negative perception is extremely important (the places that the target is observed NOT to be at)
  - Both can be used for all kinds of neat things, not just search

The more complicated the search model, the more complicated the perception and knowledge models and the maps needed to support it.
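The occupancy-map idea, including negative perception, can be illustrated with a tiny sketch over a graph of cells. This is my own toy formulation (function name, dict-based grid, and the leak-to-neighbors motion model are assumptions, not the thesis implementation):

```python
def update_occupancy(probs, neighbors, visible, p_move=0.1):
    """One tick of a target-occupancy map.
    probs: dict cell -> probability that the hidden target is there.
    neighbors: dict cell -> list of adjacent cells.
    visible: set of cells the searcher can currently see (target not spotted)."""
    # Motion model: leak p_move of each cell's mass to its neighbors,
    # since the hidden target may have moved.
    new = {c: p * (1.0 - p_move) for c, p in probs.items()}
    for c, p in probs.items():
        share = p * p_move / max(len(neighbors[c]), 1)
        for n in neighbors[c]:
            new[n] += share
    # Negative perception: the target is observed NOT to be in visible cells.
    for c in visible:
        new[c] = 0.0
    # Renormalize so the map remains a probability distribution.
    total = sum(new.values())
    return {c: p / total for c, p in new.items()} if total > 0 else new
```

Repeating this each tick makes probability mass drain out of everywhere the searcher looks and pool in the unobserved cells, which is exactly what drives believable “check the places he could be” search behavior.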
Egocentric Behaviors

The grab-bag:

- When stopped, don’t face into walls
- Don’t pick a spot that blocks a friend’s line of fire
- Don’t block the player’s line of fire, ever
- Don’t even cross the player’s line of fire
- Crouch down when someone behind me is shooting
- Move with my allies, rather than treating them as obstacles
- Get off non-pathfindable surfaces

These are hard, because they’re not exclusive behaviors; they’re things to “keep in mind”. Which means that high-level behaviors always need to be robust to their effects. I suspect that we need more mid-level spatial concepts for high- and low-level behavior to interact through.

(Butcher: basic training behavior? The stuff that has to become second nature.)
A Pattern Emerges…

Here is a typical course of events:

1. Designer says, “we want X to happen”.
2. X is implemented as a behavior.
3. In the course of implementation, a useful representation is invented.
4. The representation, it is realized, is eminently generalizable.
5. The representation, and its management, is pulled out of the behavior and made a general competence, available to all behaviors.

The important point: these representations are driven by need. (Examples: clumps, blocking objects.)
Unsolved Mysteries

- Group movement
  - Queuing
  - Formations
- “Configuration analysis”
  - My relation with my allies
  - Anticipation
- Spatial semantics
  - Rooms and doorways
  - Inside / outside
  - Understanding more environmental spatial features
The Grand Question(s) Redux

Is there a convergence between their explanatory models and our control models? Not so much “what constitutes spatial competence”, but “does our notion of spatial competence have anything to do with that of cog sci?”

Our models have no respect for or interest in what is happening in the natural mind…

… but there is something satisfyingly Darwinian about the way that some useful mental representations survive while others are culled…

… and it would be thrilling if it turned out we were working on the same problem from the opposite direction.
Questions?

Damián Isla
firstname.lastname@example.org
www.ai-blog.net

BUNGIE IS HIRING