
Slide 1: Lecture 10, CS148/248: Interactive Narrative
Expressive Intelligence Studio, UC Santa Cruz School of Engineering
www.soe.ucsc.edu/classes/cmps248/Spring2007
michaelm@cs.ucsc.edu
29 May 2007

Slide 2: Tale-Spin: world and character simulation
- Tale-Spin was the first world-and-character-simulation approach to story generation
- A story is generated as a consequence of characters pursuing plans to accomplish goals
- The world simulator automatically infers the consequences of actions taken by characters

Slide 3: Example story
- Let's look at an example story generated by micro-talespin: the story of thirsty Irving and stubborn Joe (*story2*)
- Let's examine the output to see where character and world modeling happen

Slide 4: Knowledge used by the simulator
- Goals and plans: a collection of plans the characters use to accomplish goals
  - There are alternate plans for accomplishing the same goal
  - Plans have preconditions that determine when they are appropriate
  - Plans can initiate subgoals
- Actions: a primitive set of actions known to the simulation
  - Uses conceptual dependency (CD), the ontology of actions developed in the NLP and narrative work of Roger Schank's research group
- Characters: characters possess goals (for which they look up plans to accomplish) and a memory of facts they know
- Inferences: a collection of rules for inferring the consequences of knowledge and actions
- Natural language generation rules
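
These four knowledge sources could be pictured, very roughly, as plain data structures. The Python below is an illustrative sketch only; micro-talespin itself is Common Lisp, and the names (PLAN_LIBRARY, CONSEQUENCE_RULES, the Character fields) are assumptions rather than Meehan's actual representation.

```python
from dataclasses import dataclass, field

@dataclass
class Character:
    """A Tale-Spin-style character: goals it will seek plans for, plus a memory of believed facts."""
    name: str
    goals: list = field(default_factory=list)   # goals the character looks up plans to accomplish
    memory: set = field(default_factory=set)    # facts the character currently believes

# Plan library: goal name -> list of alternative plans, tried in turn until one applies.
PLAN_LIBRARY: dict = {}

# Primitive actions (conceptual dependency acts) the simulator knows how to execute.
PRIMITIVE_ACTS = {"atrans", "grasp", "ingest", "mbuild", "mtrans", "propel", "ptrans"}

# Inference rules: act name -> list of consequence functions the world simulator fires.
CONSEQUENCE_RULES: dict = {}
```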

Slide 5: Conceptual dependency (examples)
- Atrans: transfer of possession of an object from one agent to another
- Grasp: an agent picks up or drops an object
- Ingest: an agent eats an object
- Mbuild: build new knowledge out of old
- Mtrans: transfer knowledge from one agent to another
- Propel: apply physical force to an object
- Ptrans: transfer the physical location of an object
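
As a rough illustration, a CD act can be encoded as a small record: one primitive verb plus its role fillers. The field names below are an assumption for readability, not Schank's exact CD notation.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass(frozen=True)
class CDAct:
    """A conceptual dependency act: one primitive verb plus its role fillers."""
    primitive: str                    # e.g. "atrans", "ptrans", "mtrans", ...
    actor: str
    obj: str
    recipient: Optional[str] = None   # who receives the object or knowledge, if anyone
    donor: Optional[str] = None       # who gives it up, if anyone

# "Joe transfers possession of the water to Irving" as an ATRANS:
give_water = CDAct("atrans", actor="joe", obj="water", recipient="irving", donor="joe")
```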

Slide 6: Example plans for getting an object
- DCont (GetObject): to get an object, if you know someone has it, persuade them to give it to you; otherwise try to find out where the object is, go there, and take it
  - DCont succeeds trivially if the agent already has the object
- DCont-1 (GetObject through persuasion)
  - Persuade actor owner (atrans owner object actor): the actor should persuade the owner to transfer the object to the actor
- DCont-2 (GetObject by going where it is and taking it)
  - DKnow actor (where-is object): the actor knows where the object is
  - DProx actor actor object: the actor should move itself near the object
  - DoIt (atrans actor object actor): the actor should atrans the object to itself
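
One way to write these alternatives down is as data: each plan pairs an applicability condition with an ordered list of subgoal or primitive steps. The step tuples follow the slide's "(verb actor object recipient)" reading, but the dictionary layout itself is a hypothetical sketch, not micro-talespin's representation.

```python
# GetObject (DCont) as a plan-library entry: two alternative plans, tried in order.
GET_OBJECT_PLANS = {
    "dcont": [
        {   # DCont-1: get the object through persuasion
            "applicable-when": "the actor believes some owner has the object",
            "steps": [("persuade", "actor", "owner",
                       ("atrans", "owner", "object", "actor"))],
        },
        {   # DCont-2: find the object, go to it, and take it
            "applicable-when": "no known owner",
            "steps": [("dknow", "actor", ("where-is", "object")),
                      ("dprox", "actor", "actor", "object"),
                      ("doit", ("atrans", "actor", "object", "actor"))],
        },
    ],
}
```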

Slide 7: Example plan for persuasion
- Persuade: you can persuade someone to do something by asking them, giving them food, or threatening them
- Bargain-plan: a plan for agent1 to bargain with agent2
  - Precondition: this plan can only be used if agent1 knows that agent2 is not deceitful towards agent1, agent1 knows that agent2 doesn't have food, and agent1 doesn't itself have the goal of having food
  - Mbuild agent1 (cause atrans-food (maybe action)): agent1 stores the fact that it hopes giving food will result in the desired action
  - Tell agent1 agent2 (question (cause atrans-food (future action))): agent1 asks agent2 whether giving agent2 food will result in agent2 performing the desired action
  - Dcont agent1 food: agent1 adopts the goal of getting the food
  - Dprox agent1 agent1 agent2: agent1 adopts the goal of moving itself near agent2
  - Atrans agent1 food agent2: agent1 gives the food to agent2
  - Test whether action is true: did agent2 keep the bargain?
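
Read as data, the bargain plan is a precondition list plus an ordered step list. The encoding below paraphrases the slide; the nested ("not", ...) and ("believes", ...) terms, and the dictionary layout, are illustrative assumptions rather than the original Lisp.

```python
# Hypothetical paraphrase of Bargain-plan as precondition + ordered steps.
BARGAIN_PLAN = {
    "actors": ("agent1", "agent2"),
    "precondition": [
        ("believes", "agent1", ("not", ("deceitful-toward", "agent2", "agent1"))),
        ("believes", "agent1", ("not", ("has", "agent2", "food"))),
        ("not", ("goal", "agent1", ("has", "agent1", "food"))),
    ],
    "steps": [
        ("mbuild", "agent1", ("cause", "atrans-food", ("maybe", "action"))),
        ("tell", "agent1", "agent2",
         ("question", ("cause", "atrans-food", ("future", "action")))),
        ("dcont", "agent1", "food"),
        ("dprox", "agent1", "agent1", "agent2"),
        ("atrans", "agent1", "food", "agent2"),
        ("test", ("true", "action")),   # did agent2 keep the bargain?
    ],
}
```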

Slide 8: Consequences
- Actions have consequence rules associated with them
- Example: Atrans-conseq
  - Everyone in the area notices that the agent performed the atrans
  - Everyone in the area notices that the receiving agent possesses the object
  - Everyone in the area notices that the object is physically held by the receiving agent
  - Everyone in the area notices that the giving agent no longer possesses the object
- The Atrans consequences are an example of primitive world and agent "physics": bookkeeping performed to track who knows what
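
That bookkeeping can be pictured as a consequence rule that asserts new facts into the memory of every character co-located with the actor. The function below is a minimal sketch under that assumption, using the Character record from the slide 4 example; it is not the actual micro-talespin rule.

```python
def atrans_consequences(actor, obj, recipient, characters, area_of):
    """Sketch of Atrans-conseq: everyone in the actor's area notices the
    transfer and updates what they believe about possession."""
    here = area_of[actor]
    observers = [c for c in characters if area_of[c.name] == here]
    for witness in observers:
        witness.memory.add(("did", actor, ("atrans", actor, obj, recipient)))
        witness.memory.add(("has", recipient, obj))            # recipient now possesses it
        witness.memory.add(("holds", recipient, obj))          # and physically holds it
        witness.memory.discard(("has", actor, obj))            # stale belief removed...
        witness.memory.add(("not", ("has", actor, obj)))       # ...giver no longer possesses it

# Tiny usage example (Character as sketched after slide 4):
# joe, irving = Character("joe"), Character("irving")
# atrans_consequences("joe", "water", "irving", [joe, irving],
#                     {"joe": "cave", "irving": "cave"})
```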

Slide 9: Consequences model social action
- Besides bookkeeping, consequences are used to model social action
- Promise-conseq: the consequences of y asking x to do xdo after y performs ydo
  - If x is deceitful towards y, then x will tell y they are stupid after y performs the action (establishes a demon), but tells y that they will perform xdo after y performs ydo
  - If x likes y, then x will perform xdo after y performs ydo (sets up a demon) and tells y this
  - Otherwise x says no (they won't perform xdo)
- This builds the rules of social action into the world
  - Difficult to have agent-specific responses
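
The three-way branch can be written as a single world-level rule. In the sketch below the "demon" is simplified to a returned pending reaction rather than a stored trigger, and the relation store is an assumption; the branching itself follows the slide's cases.

```python
def promise_consequences(x, y, xdo, ydo, relations):
    """Sketch of Promise-conseq: y has asked x to do `xdo` after y does `ydo`.
    Returns (what x says to y, the reaction the world schedules)."""
    if ("deceitful-toward", x, y) in relations:
        # x lies: promises xdo, but the scheduled reaction is an insult instead
        return (("promise", x, y, ("after", ydo, xdo)),
                ("after", ydo, ("mtrans", x, y, ("stupid", y))))
    if ("likes", x, y) in relations:
        # x sincerely promises, and the world schedules xdo after ydo
        return (("promise", x, y, ("after", ydo, xdo)),
                ("after", ydo, xdo))
    # otherwise x simply refuses
    return (("refuse", x, y, xdo), None)

# Example (abstract actions, purely illustrative):
# promise_consequences("joe", "irving",
#                      xdo=("do", "favor-for-irving"),
#                      ydo=("do", "favor-for-joe"),
#                      relations={("likes", "joe", "irving")})
```

Because the branch lives in the world's consequence rule rather than in any one character, every character responds the same way, which is exactly the "difficult to have agent-specific responses" problem noted above.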

Slide 10: Model of storytelling
- For Tale-Spin, a story is the result of agents pursuing plans in service of their goals
- Let's compare this with Ryan's 8 narrative dimensions
  - The spatial and temporal dimensions are met easily (individuated existents, significant transformation, non-habitual action)
  - The mental dimensions are met (some of the participants are intelligent agents who pursue planful activity motivated by goals)
  - The pragmatic dimensions are a problem
    - No unified causal chain leading to closure: the initial conditions must be carefully set to establish this
    - The story actions are asserted as facts
    - The system doesn't explicitly reason about the meaning of the story
- How would we "interactivize" Tale-Spin?

Slide 11: Hierarchical planning and characters
- Cavazza et al. is a modern incarnation of the character and world-modeling approach to story generation
- They apply character-centric hierarchical task planning to a Friends domain
- First we need some idea of what hierarchical task planning is

Slide 12: Motivation for HTN Planning
- We may already have an idea of how to go about solving problems in a planning domain
- Example: travel to a destination that's far away
  - Domain-independent planner: many combinations of vehicles and routes
  - Experienced human: a small number of "recipes", e.g., flying:
    1. buy a ticket from the local airport to the remote airport
    2. travel to the local airport
    3. fly to the remote airport
    4. travel to the final destination
- How can planning systems be enabled to make use of such recipes?

Slide 13: HTN Planning
- Problem reduction
  - Tasks (activities) rather than goals
  - Methods to decompose tasks into subtasks
  - Enforce constraints (e.g., a taxi is not good for long distances)
  - Backtrack if necessary
[Figure: decomposition tree for the task travel(UMD, Toulouse), using the method air-travel(x,y), which decomposes travel(x,y) into get-ticket(a(x),a(y)), travel(x,a(x)), fly(a(x),a(y)), travel(a(y),y), and the method taxi-travel(x,y), which decomposes travel(x,y) into get-taxi, ride(x,y), pay-driver. An attempt at get-ticket(BWI, TLS) via go-to-Orbitz and find-flights(BWI, TLS) fails, the planner backtracks to get-ticket(IAD, TLS), travels UMD to IAD by taxi, flies, and finishes with a taxi ride from TLS into Toulouse.]

Slide 14: HTN Planning
- HTN planners may be domain-specific
- Or they may be domain-configurable
  - Domain-independent planning engine
  - Domain description that defines not only the operators but also the methods
  - Problem description: domain description, initial state, initial task network
[Figure: the travel(x,y) task with its two methods, taxi-travel(x,y) (get-taxi, ride(x,y), pay-driver) and air-travel(x,y) (get-ticket(a(x),a(y)), travel(x,a(x)), fly(a(x),a(y)), travel(a(y),y)).]

Slide 15: Simple Task Network (STN) Planning
- A special case of HTN planning
- States and operators: the same as in classical planning
- Task: an expression of the form t(u1, ..., un)
  - t is a task symbol, and each ui is a term
- Two kinds of task symbols (and tasks):
  - primitive: tasks that we know how to execute directly; the task symbol is an operator name
  - nonprimitive: tasks that must be decomposed into subtasks, using methods (next slide)

Slide 16: Methods
- Totally ordered method: a 4-tuple m = (name(m), task(m), precond(m), subtasks(m))
  - name(m): an expression of the form n(x1, ..., xn), where x1, ..., xn are parameters (variable symbols)
  - task(m): a nonprimitive task
  - precond(m): preconditions (literals)
  - subtasks(m): a sequence of tasks t1, ..., tk
- Example: air-travel(x,y)
  - task: travel(x,y)
  - precond: long-distance(x,y)
  - subtasks: buy-ticket(a(x), a(y)), travel(x, a(x)), fly(a(x), a(y)), travel(a(y), y)
[Figure: the air-travel(x,y) method decomposing travel(x,y), under precondition long-distance(x,y), into the four subtasks above.]
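
One concrete way to write the 4-tuple down (illustrative only, not the input syntax of any particular HTN planner), with a(x) spelled out as ("airport-of", x):

```python
from collections import namedtuple

# A totally ordered method: (name, task it decomposes, preconditions, ordered subtasks).
Method = namedtuple("Method", ["name", "task", "precond", "subtasks"])

# The air-travel method from the slide.
air_travel = Method(
    name=("air-travel", "x", "y"),
    task=("travel", "x", "y"),
    precond=[("long-distance", "x", "y")],
    subtasks=[("buy-ticket", ("airport-of", "x"), ("airport-of", "y")),
              ("travel", "x", ("airport-of", "x")),
              ("fly", ("airport-of", "x"), ("airport-of", "y")),
              ("travel", ("airport-of", "y"), "y")],
)
```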

Slide 17: Limitation of Ordered-Task Planning
- Cannot interleave subtasks of different tasks
- Sometimes this can make things awkward
- Need methods that reason globally instead of locally
[Figure: with totally ordered methods, get-both(p,q) decomposes into get(p) then get(q), yielding walk(a,b), pickup(p), walk(b,a), walk(a,b), pickup(q), walk(b,a); a global method that decomposes get-both(p,q) into goto(b), pickup-both(p,q) (pickup(p), pickup(q)), goto(a) avoids the redundant walking.]

Slide 18: Generalize the Methods
- Generalize methods to allow the subtasks to be partially ordered
- Consequence: plans may interleave subtasks of different tasks
- This makes the planning algorithm more complicated
[Figure: a partially ordered decomposition of get-both(p,q) in which get(p) (walk(a,b), pickup(p)) and get(q) (stay-at(b), pickup(q)) interleave, followed by walk(b,a) and stay-at(a).]

Slide 19: Applying HTN planning
- Domain: methods, operators
- Problem: methods, operators, initial state, task list
- Solution: any executable plan that can be generated by recursively applying
  - methods to nonprimitive tasks
  - operators to primitive tasks
[Figure: a nonprimitive task is reduced by a method instance (whose preconditions are checked) into primitive tasks; operator instances then apply their preconditions and effects, transforming the state sequence s0, s1, s2.]
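
The recursion on this slide is essentially total-order forward decomposition. The sketch below is a minimal propositional version (no variables, illustrative operator and method tables), not the full STN algorithm from the planning literature.

```python
def tfd(state, tasks, operators, methods):
    """Minimal total-order forward decomposition.
    state: set of facts; tasks: list of task names;
    operators: task -> (precond set, add set, delete set);
    methods: task -> list of (precond set, subtask list).
    Returns a list of primitive actions, or None if no plan exists."""
    if not tasks:
        return []
    first, rest = tasks[0], tasks[1:]
    if first in operators:                           # primitive task: apply the operator
        pre, add, dele = operators[first]
        if pre <= state:
            tail = tfd((state - dele) | add, rest, operators, methods)
            if tail is not None:
                return [first] + tail
        return None
    for pre, subtasks in methods.get(first, []):     # nonprimitive: try each method, backtrack on failure
        if pre <= state:
            plan = tfd(state, list(subtasks) + rest, operators, methods)
            if plan is not None:
                return plan
    return None

# Toy domain: buy a ticket, get to the airport, then fly.
operators = {
    "buy-ticket":      (set(), {"has-ticket"}, set()),
    "taxi-to-airport": (set(), {"at-airport"}, set()),
    "fly":             ({"at-airport", "has-ticket"}, {"at-destination"}, {"at-airport"}),
}
methods = {
    "travel": [(set(), ["buy-ticket", "taxi-to-airport", "fly"])],
}
print(tfd(set(), ["travel"], operators, methods))
# -> ['buy-ticket', 'taxi-to-airport', 'fly']
```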

Slide 20: Application of HTN to storytelling
- They interleave planning and execution
  - Backtrack when a primitive action fails
  - This differs from traditional HTN planning, which performs a full forward search
- They use a total ordering of task nets
  - They argue that, since stories are structurally decomposable into unique pieces, intermixing isn't needed
  - But their formalism conflates character-level and story-level distinctions, resulting in characters who can only do one thing at a time
    - A problem for believability
- HTNs, as a knowledge-rich planning formalism, are appropriate for storytelling
  - Internal nodes (task nets) can implicitly encode "desired world changes" that aren't explicitly captured in the domain ontology (procedural vs. declarative encoding)
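
Interleaving decomposition with execution could be sketched as follows. This is only a toy under several assumptions: `try_execute` stands in for the game engine (here it fails at random), decompositions are totally ordered lists, and already executed actions are not undone on backtracking, which is a simplification relative to what Cavazza et al. actually do.

```python
import random

def plan_and_execute(task, methods, primitives):
    """Sketch of interleaved HTN planning and execution: decompose top-down,
    execute primitives as soon as they are reached, and move on to the next
    alternative method when a primitive action fails in the world."""
    if task in primitives:
        return try_execute(task)                   # executed immediately; may fail
    for subtasks in methods.get(task, []):         # alternative decompositions, in order
        if all(plan_and_execute(t, methods, primitives) for t in subtasks):
            return True                            # this decomposition worked
        # a primitive somewhere below failed: fall through to the next method
    return False                                   # no decomposition of this task succeeded

def try_execute(action):
    """Stand-in for the game engine: actions occasionally fail."""
    ok = random.random() > 0.2
    print(("did" if ok else "failed"), action)
    return ok

methods = {"get-coffee": [["go-to-cafe", "order"], ["go-to-machine", "press-button"]]}
primitives = {"go-to-cafe", "order", "go-to-machine", "press-button"}
plan_and_execute("get-coffee", methods, primitives)
```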

