1 SMART ANIMATED AGENTS -- Norman I. Badler, Course #24 Co-Organizer (with John Funge). Center for Human Modeling and Simulation, University of Pennsylvania, Philadelphia, PA 19104-6389. http://www.cis.upenn.edu/~badler

2 Course Speakers
– Norman Badler (U. Pennsylvania)
– Justine Cassell (MIT Media Lab)
– John Funge (Sony Computer Entertainment America)
– Jeff Rickel (ISI / U. So. California)
– Bruce Blumberg (MIT Media Lab)

3 Course Schedule (a.m.)
– 8:30-8:35 Badler (Introduction)
– 8:35-10:00 Badler
– 10:00-10:15 (break)
– 10:15-11:45 Cassell
– 11:45-12:00 Questions and Issues
– 12:00-1:30 (lunch)

4 Course Schedule (p.m.)
– 1:30-2:30 Funge
– 2:30-3:00 Rickel
– 3:00-3:15 (break)
– 3:15-3:45 Rickel
– 3:45-4:45 Blumberg
– 4:45-5:00 Questions and Issues

5 Course #24 Topics
– Action Primitives and Action Representation
– Natural Language Interfaces
– Conversational and Communicative Agents
– Cognitive Modeling
– Pedagogical Agents
– Task-Oriented Collaboration
– Learning

6 Building Smart Agents (Badler)
– Introduction and Applications
– Smart Avatars
– Parameterized Action Representation (PAR)
– Natural Language Instructions
– Automating Attention
– Agent Manner
– Building PARs

7 Building Smart Agents
– Introduction and Applications
– Smart Avatars
– Parameterized Action Representation (PAR)
– Natural Language Instructions
– Automating Attention
– Agent Manner
– Building PARs

8 Introduction to and Applications for Embodied Agents
– Engineering Ergonomics.
– Design and Maintenance Assessment.
– Games/Special Effects.
– Military Simulations.
– Job Education/Training.
– Medical Simulations.

9 Virtual Human “Dimensions”
– Appearance
– Function
– Time
– Autonomy
– Individuality

10 Appearance: 2D drawings > 3D wireframe > 3D polyhedra > curved surfaces > freeform deformations > accurate surfaces > muscles, fat > biomechanics > clothing, equipment > physiological effects (perspiration, irritation, injury)

11 Function: cartoon > jointed skeleton > joint limits > strength limits > fatigue > hazards > injury > skills > effects of loads and stressors > psychological models > cognitive models > roles > teaming

12 Time (to create / move each): off-line animation > interactive manipulation > real-time motion playback > parameterized motion synthesis > multiple agents > crowds > coordinated teams

13 Autonomy: drawing > scripting > interacting > reacting > making decisions > communicating > intending > taking initiative > leading

14 Individuality: generic character > hand-crafted character > cultural distinctions > sex and age > personality > psychological-physiological profiles > specific individual

15 Comparative Graphical Agents

Application   Appear.   Function   Time   Autonomy   Individ.
Cartoons      high      low        high   low        high
Sp. Effects   high      low        high   low        med
Medical       high      high       med    med        med
Ergonomics    med       high       med    med        low
Games         high      low        low    med/high   med
Military      med       med        low    med/high   low
Education     med       low        low    med/high   med
Training      med       low        low    high       med

16 Building Smart Agents
– Introduction and Applications
– Smart Avatars
– Parameterized Action Representation
– Natural Language Instructions
– Automating Attention
– Agent Manner
– Building PARs

17 Why Smart Avatars? For motion control:
– Point-and-click (menu or direct 2D manipulation).
– Directly sensed (3D motion capture).
– Language commands (text or speech).
– Use instructions -- as if the agent were oneself or another real person: a Smart Avatar.

18 Smart Agent Requirements
– Actions to Execute: Action Representation -- what it can do.
– Behavior Model: the agent’s decision-making, “thought,” and reaction processes -- what it should do or wants to do.
– Inputs to Effect Behavior: incoming knowledge about the outside world -- what it needs to know.

19 “Classic” AI: Agent Action Cycle
[Diagram: the agent model runs a Sense > Control > Act loop, connected to the world model through sensors, messages, and the current situation.]
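A minimal sketch of this sense-control-act cycle in Python; the function names and the dict-based world model are assumptions for illustration, not the course software:

```python
# Minimal sense-control-act cycle over a dict-based world model.
# All names here are illustrative.

def sense(world):
    """Gather the agent's percept: the situation plus incoming messages."""
    return {"situation": world.get("situation"), "messages": world.get("messages", [])}

def control(agent, percept):
    """Decide on an action from the percept and the agent model."""
    if "greet" in percept["messages"]:
        return "wave"
    return agent.get("default_action", "idle")

def act(world, action):
    """Apply the chosen action back to the world model."""
    world["last_action"] = action

world = {"situation": "lobby", "messages": ["greet"]}
agent = {"default_action": "idle"}
for _ in range(3):                 # a few ticks of the cycle
    action = control(agent, sense(world))
    act(world, action)
    world["messages"] = []         # messages are consumed each tick
print(world["last_action"])        # -> "idle" (the greeting was handled on tick 1)
```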

20 Smart Agent Requirements
– Actions to execute: Action Representation -- what it can do.
– Behavior Model: the agent’s decision-making, “thought,” and reaction processes.
– Inputs to Effect Behavior: incoming knowledge about the outside world.

21 4 Levels of Action Representation
– 0: Basic Motion Generators
– 1: Parallel Transition Networks
– 2: Parameterized Actions
– 3: Natural Language Instructions

22 Level 0: Basic Human Movement Capabilities
– Gesture / Reach / Grasp.
– Walk / Turn / Climb.
– Posture Transitions (Sit / Stand).
– Visual Attention / Search.
– Pull / Lift / Carry.
– Motion playback (captured or scripted).
– ‘Noise’ or secondary movements.

23 Synthesized Motions -- Leverage Economy of Expression. Few parameters controlling many:
– Inverse kinematics for arms, legs, spine.
– Paths or footsteps driving locomotion.
– Balance constraint on the whole body.
– Dynamics control from forces and torques.
– Facial expressions.

24 Smart Agent Requirements
– Actions to execute: Action Representation.
– Behavior Model: the agent’s decision-making, “thought,” and reaction processes -- what it should do or wants to do.
– Inputs to Effect Behavior: incoming knowledge about the outside world.

25 Raise the Level of Behavioral Control from Level 0. AnimNL project (~1988-1994):
– “Go into the kitchen and get me the coffee urn” (manual scripting of actions).
– SodaJack (action planner + object-specific reasoner).
– Needed a better underlying paradigm upon which to build smarter agents.

26 Level 1: Parallel Transition Networks (PaT-Nets). A virtual parallel execution engine for agent actions (a.k.a. Finite State Machines):
– Processes are nodes.
– Instantaneous (conditional or probabilistic) transitions are edges.
– Hierarchic.
– Message passing and synchronization.
– Emerging common paradigm for agent control.
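A toy PaT-Net-style machine in Python under the two properties named above (process nodes; conditional or probabilistic edges). The class and example names are invented; real PaT-Nets add hierarchy, message passing, and synchronization:

```python
import random

# Toy PaT-Net-style state machine: the current node runs a process each
# tick, then its outgoing edges may fire on a guard and/or a probability.

class PatNet:
    def __init__(self, start):
        self.node = start
        self.edges = {}          # node -> list of (guard, probability, next_node)

    def add_edge(self, src, dst, guard=lambda ctx: True, prob=1.0):
        self.edges.setdefault(src, []).append((guard, prob, dst))

    def tick(self, ctx, processes):
        processes[self.node](ctx)                 # run the node's process
        for guard, prob, dst in self.edges.get(self.node, []):
            if guard(ctx) and random.random() < prob:
                self.node = dst                   # instantaneous transition
                break

# Example: walk until at the goal, then reach.
ctx = {"dist": 3}
processes = {
    "walk":  lambda c: c.__setitem__("dist", c["dist"] - 1),
    "reach": lambda c: print("reaching"),
}
net = PatNet("walk")
net.add_edge("walk", "reach", guard=lambda c: c["dist"] <= 0)
for _ in range(5):
    net.tick(ctx, processes)
```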

27 PaT-Net Applications
– Conversational agents. (SIGGRAPH ‘94)
– Hide and seek. (VRAIS ‘96)
– MediSim: Physiological models. (Presence ‘96)
– Jack Presenter. (AAAI-97 Workshop / IEEE CG&A)
– Delsarte Presenter. (Pacific Graphics ‘98)
– JackMOO. (WebSim ‘98, VR ‘99)
– AVA (Attention). (Autonomous Agents ‘99)

28 What’s Missing?
– PaT-Nets are effective but hand-coded.
– No matter what artificial language we introduce, it is not the way people conceptualize the situation. (Badler/Webber)
– Connect language and animation through an intermediate level --

29 Level 2: Parameterized Action Representation (PAR). Derived from BOTH natural language analyses and animation requirements:
– Agent, Objects, Sub-Actions.
– Preparatory Specifications, Postconditions.
– Applicability and Termination Conditions.
– Purpose (Achieve, Generate, Enable).
– Path, Duration, Motion, Force.
– Agent Manner.
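One way to picture these slots is as a record type. A minimal Python sketch -- the field names follow the slide, but the types, defaults, and the "draw gun" example (which anticipates slide 44) are assumptions:

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

# Minimal PAR-like record: slots mirror the slide; the real PAR schema
# is richer (see the UPAR/IPAR slides in the checkpoint case study).

@dataclass
class PAR:
    name: str
    agent: Optional[str] = None
    objects: List[str] = field(default_factory=list)
    sub_actions: List["PAR"] = field(default_factory=list)
    preparatory_specs: List[str] = field(default_factory=list)
    applicability: Callable[[dict], bool] = lambda world: True
    termination: Callable[[dict], bool] = lambda world: False
    postconditions: List[str] = field(default_factory=list)
    purpose: Optional[str] = None       # e.g. "achieve", "generate", "enable"
    path: Optional[str] = None
    duration: Optional[float] = None
    manner: Optional[str] = None        # e.g. EMOTE Effort/Shape settings

draw_gun = PAR(
    name="draw gun",
    agent="officer",
    objects=["gun", "holster"],
    applicability=lambda w: w.get("has_gun", False),
    termination=lambda w: not w.get("gun_in_holster", True),
    postconditions=["gun in hand"],
)
print(draw_gun.applicability({"has_gun": True}))   # -> True
```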

30 Level 3: Natural Language Instructions
– Instructions say what to do.
– Instructions depend on underlying action skills.
– Instructions build agent behaviors (future actions or standing orders).

31 Integrated Approach: The JackMOO Testbed
– JackMOO goal: create a multi-user, shared, 3D virtual environment with full-body avatars and autonomous human agents, language-based commands, and low network bandwidth.
– Based on the lambdaMOO engine with the Jack 3D environment and simple imperative commands.

33 JackMOO Smart Avatar Experiments
– Greetings (gender- and culture-specific).
– Go to … (Sit in chair; Go to bed; Leave) -- do unspecified but necessary preparatory actions.
– Relationships (Follow me) -- mutual agreement.
– Autonomous Agents (Waiter) -- reacts to the environment and the states of other agents.

34 Expanding the Agent Model
– Create individuals or specific people.
– Link perceptions of context to action.
– Embed action planning capabilities.
– Add an emotional planner.
– Agent = Intentions × Personality × Emotions × Context × History × Capabilities × …

35 Adapt the OCC Model of Agent Emotional Response
– Consequences of events:
  – Consequences for self.
  – Consequences for others.
– Actions of agents:
  – Self.
  – Others.
– Aspects of objects.

36 OCC Model of Emotions. Valenced reactions to consequences of events (pleased, displeased), actions of agents (approving, disapproving), and aspects of objects (liking, disliking):
– Consequences for others (FORTUNES OF OTHERS): event desirable for the other agent -> Happy-for / Resentment; undesirable -> Gloating / Pity.
– Consequences for self, prospects relevant (PROSPECT-BASED): Hope / Fear; confirmed -> Satisfaction / Fears-confirmed; disconfirmed -> Relief / Disappointment.
– Consequences for self, prospects irrelevant (WELL-BEING): Joy / Distress.
– Actions of agents (ATTRIBUTION): self agent -> Pride / Shame; other agent -> Admiration / Reproach.
– Aspects of objects (ATTRACTION): Love / Hate.
– WELL-BEING/ATTRIBUTION COMPOUNDS: Gratification / Remorse; Gratitude / Anger.
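A tiny lookup in this spirit, assuming an appraisal can be reduced to (what is appraised, detail, valenced reaction) triples; purely illustrative, not the course implementation:

```python
# Toy OCC-style emotion lookup over a slice of the table above.

OCC = {
    ("event-for-other", "desirable",   "pleased"):    "happy-for",
    ("event-for-other", "desirable",   "displeased"): "resentment",
    ("event-for-other", "undesirable", "pleased"):    "gloating",
    ("event-for-other", "undesirable", "displeased"): "pity",
    ("event-for-self",  "prospective", "pleased"):    "hope",
    ("event-for-self",  "prospective", "displeased"): "fear",
    ("event-for-self",  "actual",      "pleased"):    "joy",
    ("event-for-self",  "actual",      "displeased"): "distress",
    ("agent-action",    "self",        "approving"):    "pride",
    ("agent-action",    "self",        "disapproving"): "shame",
    ("agent-action",    "other",       "approving"):    "admiration",
    ("agent-action",    "other",       "disapproving"): "reproach",
    ("object-aspect",   "-",           "liking"):     "love",
    ("object-aspect",   "-",           "disliking"):  "hate",
}

print(OCC[("event-for-other", "undesirable", "displeased")])  # -> pity
```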

37 Smart Agent Requirements
– Actions to execute: Action Representation.
– Behavior Model: the agent’s decision-making, “thought,” and reaction processes.
– Inputs to Effect Behavior: incoming knowledge about the outside world -- what it needs to know.

38 Response Requires Input
– Sensing the state of events:
  – Self (action postconditions).
  – Others (messages; observations).
– Sensing the actions of agents:
  – Self-knowledge (what am I doing).
  – Others (messages; observations).
– Sensing the actions of objects:
  – Smart objects.

39 Input Sensing
– Message passing: explicit transfer or direct knowledge of state information between agents.
– Artificial perception: visual/auditory/haptic [collision detection] sensing to attend to and observe the local context.
– Situation awareness: recognizing complex relationships.

40 Training the Agent Model
[Diagram: the Sense > Control > Act agent-model cycle again.]
– Hand-coded procedures.
– Rule-based systems.
– Natural Language instructions.
– By example (demonstration).

41 Building Smart Agents
– Introduction and Applications
– Smart Avatars
– Parameterized Action Representation
– Natural Language Instructions
– Automating Attention
– Agent Manner
– Building PARs

42 Recall: Parameterized Action Representation (PAR). Representation derived from BOTH NL analyses and animation requirements:
– Agent, Objects, Sub-Actions.
– Preparatory Specifications, Postconditions.
– Applicability and Termination Conditions.
– Purpose (Achieve, Generate, Enable).
– Path, Duration, Motion, Force.
– Agent Manner.

44 Examples of PAR Action Fragments
– Preparatory Specifications: if not at the proper location to execute the action, get there and get into the correct pose to continue.
– Applicability Conditions: in order to use a gun, the agent must have one; he does not have to go find one.
– Termination Conditions: “Draw gun” terminates when the gun is no longer in the holster.

45 Examples of PAR Action Fragments
– Path parameters: walk to a given location; reach to a given place.
– Agent manner: walk style; expressive content (EMOTE parameters).
– Postconditions: after the “receive object” action terminates, the agent has the object.

46 Case Study: The Virtual Reality Checkpoint Trainer
– Joint ONR project between UPenn, UHouston, and EAI.
– Multi-agent and/or avatar situation.
– Process simulator for traffic.
– Autonomous agents.
– Real-time behaviors and reactions.
– Natural Language input for “standing orders”.
– (Next step: live trainees in VR.)

47 Virtual Environment [image slide]

48 The Checkpoint Scene [image slide]

49 Components of the Checkpoint Scenario
– The PAR system architecture.
– Agents and behavior rules.
– The Actionary.
– Actions currently represented.
– Python implementation.
– Natural Language inputs.
– But first, the video.

50 PAR System Architecture
[Diagram: NL2PAR, the Execution Engine, the Rule Manager, and the Actionary (Actions, Objects), connected to Agent Processes 1..N (each with a Queue Manager and a Process Manager), with a Visualizer built on the Jack Toolkit and Motion Generators.]

51 Execution Engine
– Main system control loop.
– Maintains a global timer for synchronization.
– Inputs NL instructions.
– Outputs scene updates.

52 PAR Representations
– UPAR (Uninstantiated PAR): contains default applicability conditions, preparatory specifications, and execution steps; stored in the Actionary™.
– IPAR (Instantiated PAR): a UPAR instantiated with specific information on agent, physical objects, manner, termination conditions, etc.
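The UPAR/IPAR split can be sketched as a template plus an instantiation step. A guess at the shape in Python -- the dict layout and names are assumed, not the actual Actionary schema:

```python
import copy

# Sketch: a UPAR is a reusable action template; an IPAR is a copy of it
# bound to a particular agent, objects, and per-use overrides.

UPAR_WALK_TO = {
    "name": "walk to",
    "applicability": "agent can locomote",
    "preparatory": ["stand up if sitting"],
    "execution_steps": ["plan path", "step along path"],
}

def instantiate(upar, agent, objects, **overrides):
    """Build an IPAR: copy the UPAR and bind runtime specifics."""
    ipar = copy.deepcopy(upar)
    ipar.update({"agent": agent, "objects": objects})
    ipar.update(overrides)          # e.g. manner, termination conditions
    return ipar

ipar = instantiate(UPAR_WALK_TO, agent="driver", objects=["checkpoint gate"],
                   manner="hurried")
print(ipar["agent"], ipar["manner"])   # -> driver hurried
```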

53 PAR System Architecture (repeated)
[Diagram as on slide 50.]

54 Agent Process
– The Queue Manager maintains a priority-based, multi-layered, preemptive queue of all the IPARs to be executed by the agent.
– The Process Manager processes each IPAR, checking terminations, applicability, preparatory specs, and execution steps.
– Triggers actions based on emotion and context.
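A minimal priority queue with preemption, sketched in Python with heapq. It is single-layered for brevity (the actual Queue Manager is multi-layered), and the class and example names are assumptions:

```python
import heapq
import itertools

# Sketch of a preemptive priority queue of IPARs. Lower number = higher
# priority; a newly queued higher-priority action preempts the current one.

class QueueManager:
    def __init__(self):
        self._heap, self._count = [], itertools.count()
        self.current = None                       # (priority, ipar) running now

    def enqueue(self, priority, ipar):
        if self.current and priority < self.current[0]:
            # Preempt: push the running action back onto the queue.
            heapq.heappush(self._heap,
                           (self.current[0], next(self._count), self.current[1]))
            self.current = None
        heapq.heappush(self._heap, (priority, next(self._count), ipar))

    def step(self):
        """Return the IPAR to run this tick."""
        if self.current is None and self._heap:
            prio, _, ipar = heapq.heappop(self._heap)
            self.current = (prio, ipar)
        return self.current[1] if self.current else None

qm = QueueManager()
qm.enqueue(5, "inspect car")
print(qm.step())                 # -> inspect car
qm.enqueue(1, "run for cover")   # higher priority preempts
print(qm.step())                 # -> run for cover
```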

55 Agent Rule Manager
– Relays IPARs generated by the NL2PAR module for “immediate instructions” to the correct Agent Process.
– Stores translated “standing orders” as complex rules in a rule table.
– Evaluates the rules at each frame of the simulation and sends the generated IPARs, if any, to the appropriate Agent Process.
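A standing order reduces to a (condition, action) pair checked every frame. A bare-bones sketch in Python -- the rule shape and dispatch interface are assumed:

```python
# Sketch of a rule table for standing orders: each rule is a predicate
# over the world state plus an IPAR factory, evaluated once per frame.

class RuleManager:
    def __init__(self):
        self.rules = []            # list of (condition, make_ipar)

    def add_standing_order(self, condition, make_ipar):
        self.rules.append((condition, make_ipar))

    def evaluate(self, world, dispatch):
        """Run every frame: fire matching rules into an agent process."""
        for condition, make_ipar in self.rules:
            if condition(world):
                dispatch(make_ipar(world))

rm = RuleManager()
# "If the driver has a gun, run for cover." (see the standing-order examples)
rm.add_standing_order(lambda w: w.get("driver_has_gun"),
                      lambda w: {"action": "run for cover", "agent": "guard"})

fired = []
rm.evaluate({"driver_has_gun": True}, fired.append)
print(fired)   # -> [{'action': 'run for cover', 'agent': 'guard'}]
```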

56 PAR System Architecture (repeated)
[Diagram as on slide 50.]

57 The Actionary™
– Links natural language and actions.
– Holds persistent definitions (database) of actions as UPARs.
– Constructed through a GUI or (eventually) natural language input.
– Goes beyond motion capture libraries.
– Makes use of PaT-Nets and all lower-level motion generation tools during execution.

58 PAR Tree for Checkpoint Actions
[Image: tree showing the current Actionary of PARs for the checkpoint scene.]

59 Actions in Detail [image slide]

60 Python Integration in PAR (1)
– The Actionary of defined PARs and objects is coded in Python.
– The execution steps of a PAR defined in a Python script are interpreted and dynamically expanded into C++ PaT-Nets.
– A PAR can be completely created from the GUI and dynamically added to working memory.

61 Python Integration in PAR (2)
– The Actionary database is loaded into working memory during each application run.
– Specifications of the conditional properties (termination, applicability, preparatory) of a PAR are stored as Python scripts and can be easily altered on the fly.
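Storing conditions as small Python scripts evaluated at runtime might look like the following. This is a sketch only; the PAR system's actual script interface is not shown on the slides:

```python
# Sketch: conditional properties kept as Python source strings, compiled
# and evaluated against the world state each time they are checked. This
# mirrors the idea of on-the-fly editable conditions, not the real API.

class ScriptedPAR:
    def __init__(self, name):
        self.name = name
        self.conditions = {}                # property -> compiled code

    def set_condition(self, prop, source):
        """Replace a condition at runtime by recompiling its script."""
        self.conditions[prop] = compile(source, f"<{self.name}:{prop}>", "eval")

    def check(self, prop, world):
        return bool(eval(self.conditions[prop], {}, {"world": world}))

par = ScriptedPAR("draw gun")
par.set_condition("applicability", "world['has_gun']")
par.set_condition("termination", "not world['gun_in_holster']")
print(par.check("applicability", {"has_gun": True, "gun_in_holster": True}))  # True

# Altered on the fly: tighten the applicability condition.
par.set_condition("applicability", "world['has_gun'] and world['threat'] > 1")
print(par.check("applicability",
                {"has_gun": True, "gun_in_holster": True, "threat": 0}))      # False
```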

62 Building Smart Agents
– Introduction and Applications
– Smart Avatars
– Parameterized Action Representation
– Natural Language Instructions
– Automating Attention
– Agent Manner
– Building PARs

63 Why a Natural Language Interface? Give complex commands (when a menu just won’t do!):
– Instructions with conjunctions and relative clauses:
  – “Go to the closet Susan opened and get a flashlight.”

64 Provide Information. Answer questions about the virtual environment, other agents, or the tasks:
– “What is now in the tool box?”
– “Where is Sam going?”
– “Can Lucy see Charlie?”
– “What’s the next step in this procedure?”
– “When do I stop doing this?”

65 Give “Standing Orders”. Provide persistent instructions that depend on trigger conditions:
– “When the door opens, go inside.”
– “If someone’s glass is empty, fill it.”
– “Drink only from your own glass.”
– “If the driver has a gun, run for cover.”
More examples in the video.

66 Natural Language Instructions
– Uses the XTAG Tree Adjoining Grammar parser with a broad-coverage English grammar.
– XTAG translates parse trees into PARs.
– Uses the modeled environment to choose correct lexical semantics (sense disambiguation and reference binding).
– Instructions build agent behaviors (future actions or standing orders).

67 Building Smart Agents
– Introduction and Applications
– Smart Avatars
– Parameterized Action Representation
– Natural Language Instructions
– Automating Attention
– Agent Manner
– Building PARs

68 Working Toward Human-ness: Two Hypotheses
– Better understanding of real human movement ought to increase the naturalness of embodied agent behaviors.
– Human body motor systems work in integrated ways, controlled by high-level goals and intentions and only weakly accessible to deliberate intervention: “We can’t help how we act.”

69 How Do We Realize These Hypotheses?
– Use Cognitive Science models and data if possible.
– Use experience and models from human motion experts to develop appropriate character motions.
– Drive the character from within, not just by reaction.
– Build more integrated motion controllers.

70 Building Smart Agents
– Introduction and Applications
– Smart Avatars
– Parameterized Action Representation
– Natural Language Instructions
– Automating Attention
– Agent Manner
– Building PARs

71 AVA: Automated Visual Attending (Sonu Chopra)
– Input: a list of cognitive and motor tasks (e.g., walk to table, search for target, monitor object).
– Output: animation of the character’s head, eye, and body movements. Attending behavior emerges as a competition between deliberate, involuntary, and spontaneous attention.

72 Purpose. Visual attention is an important characteristic of human activity:
– Fills in unspecified “behavioral detail”.
– Models competing events, increasing cognitive load and visual idling.
– Modifies motor activity based on visual inputs.
– Interleaves eye and motor behaviors.

73 Psychologically Motivated. Model structure and inputs from:
– Cognitive psychology.
– Biologically inspired models of computer vision.
– Human ergonomics.
Implemented as a PaT-Net: GazeNet.

74 Cognitive Psychology -- Types of Attending Behavior
– Deliberate (voluntary or endogenous) attention.
– Involuntary (exogenous and spontaneous) attention.
These compete for the attention resource.

75 Types of Eye Behaviors
– Visual Search.
– Monitoring: locomotion, visual tracking, limit conditions.
– Reach and Grasp.
– Attention Capture by Peripheral Motion.
– Spontaneous Looking (Idling).

76 Major Components of AVA GazeNet
– IntentionList: queue of sites (or objects) related to deliberate tasks that are currently vying for attention.
– Plist: queue of objects in the agent’s periphery that are moving.
– Spontaneous Looking: look at the locations with the highest local feature contrast.
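How these sources might compete each frame; a simplified Python sketch in which the ordering rule (peripheral capture beats deliberate targets beats idling) is an assumption for illustration, not AVA's actual policy:

```python
# Sketch of per-frame gaze arbitration between the deliberate
# IntentionList, the peripheral-motion Plist, and spontaneous looking.

def choose_gaze_target(intention_list, plist, salient_points):
    if plist:                                  # involuntary capture
        return plist.pop(0)
    if intention_list:                         # deliberate task target
        return intention_list[0]
    if salient_points:                         # spontaneous looking (idling)
        return max(salient_points, key=lambda p: p["contrast"])["site"]
    return None

intentions = ["table"]
peripheral = ["passing car"]
salience = [{"site": "window", "contrast": 0.7},
            {"site": "wall", "contrast": 0.1}]

print(choose_gaze_target(intentions, peripheral, salience))  # -> passing car
print(choose_gaze_target(intentions, peripheral, salience))  # -> table
print(choose_gaze_target([], [], salience))                  # -> window
```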

77 Architecture
[Diagram: AVA implemented as a PaT-Net: GazeNet.]

78 Motor Activity Modified by Visual Input
– If attention is captured by peripheral motion, continue to “deliberately track” the object if it is on a collision course.
– Slow down motion (walk or reach) in case of increasing cognitive load.
– Increase response time to task targets with increasing (deliberate) load or in the presence of peripheral motion.

79 Peripheral Motion Sensor
– Sample for motion (by querying the scene graph) those objects that fall in the agent’s peripheral field of view (between 10º and 90º horizontal and 10º and 65º vertical).
– Add objects that move to the Plist.
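The peripheral-field test is just an angular band check. A Python sketch using the slide's angle bounds -- the gaze-aligned frame, the choice of combining the bands with "or", and the example numbers are assumptions:

```python
import math

# Sketch: is an object in the peripheral band (10-90 deg horizontal,
# 10-65 deg vertical off the gaze direction)? Assumes the agent looks
# down +Y with +X right and +Z up; purely illustrative geometry.

def in_peripheral_field(offset):
    x, y, z = offset                          # object position relative to the eye
    if y <= 0:                                # behind the agent
        return False
    h = abs(math.degrees(math.atan2(x, y)))   # horizontal eccentricity
    v = abs(math.degrees(math.atan2(z, y)))   # vertical eccentricity
    return (10 <= h <= 90) or (10 <= v <= 65)

print(in_peripheral_field((2.0, 3.0, 0.0)))   # ~34 deg horizontal -> True
print(in_peripheral_field((0.1, 5.0, 0.1)))   # near the fovea -> False
```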

80 Peripheral Motion Sensor
– If a moving object “captures” attention, the agent estimates collision likelihood based on velocity and heading. If collision is likely, deliberate tracking is performed.
– The object’s presence increases response time to deliberate targets even if the agent doesn’t overtly orient.

81 Interleaving and the TaskQ Manager
– The distinction between eye behavior and underlying agent motion allows for attention interleaving.
– If the agent is expert, eye behavior for the next task assigned to the agent is initiated before the prior motor activity is complete: e.g., looking at the goal of a reach before the walk to the destination is complete.

82 Potential Applications
– Human data (eye-tracking studies) can be input to AVA and visualized in an embodied agent.
– AVA can be extended to model situation awareness (when do critical events remain unattended?) for:
  – games
  – real-time simulations

83 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Building Smart Agents
– Introduction and Applications
– Smart Avatars
– Parameterized Action Representation
– Natural Language Instructions
– Automating Attention
– Agent Manner
– Building PARs

84 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Animating Agent Manner from PAR (Chi, Costa, Zhao)
– Motion control paradigm for a parameterized range of natural-looking movements.
– Based on the Effort component of Rudolf Laban’s movement theory, Laban Movement Analysis (LMA).
– Proceduralizes qualitative aspects of movement while providing textual descriptors along just four dimensions.

85 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Effort Motion Factors
Four factors range from an indulging extreme to a fighting extreme:
  Space:  Indirect --------- Direct
  Weight: Light ------------ Strong
  Time:   Sustained -------- Sudden
  Flow:   Free ------------- Bound

86 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Shape Motion Factors
Four factors:
  Sagittal:   Advancing ----- Retreating
  Vertical:   Rising -------- Sinking
  Horizontal: Spreading ----- Enclosing
  Flow:       Growing ------- Shrinking
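The eight factors map naturally onto eight scalar sliders. Encoding each axis in [-1, +1] is an assumption here (the slides only name the extremes), with sign conventions chosen to match the two ends of each line above:

```python
from dataclasses import dataclass

@dataclass
class Effort:
    space:  float = 0.0   # -1 Indirect  ... +1 Direct
    weight: float = 0.0   # -1 Light     ... +1 Strong
    time:   float = 0.0   # -1 Sustained ... +1 Sudden
    flow:   float = 0.0   # -1 Free      ... +1 Bound

@dataclass
class Shape:
    sagittal:   float = 0.0   # -1 Retreating ... +1 Advancing
    vertical:   float = 0.0   # -1 Sinking    ... +1 Rising
    horizontal: float = 0.0   # -1 Enclosing  ... +1 Spreading
    flow:       float = 0.0   # -1 Shrinking  ... +1 Growing
```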

87 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Kinematic Model of Effort and Shape
Designed with LMA expert guidance.
Parameters include:
– path curvature
– interpolation space
– number of frames between keypoints
– velocity curve parameters
– anticipation
– overshoot
– squash & stretch
– breath
– wrist bend
– arm twist
– limb volume
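A toy example (constants assumed, not EMOTE's actual mapping) of how one qualitative factor drives these low-level parameters: a Sudden Time value shortens the keypoint interval and pulls the velocity peak earlier:

```python
def frames_between_keypoints(base_frames: int, time_factor: float) -> int:
    """time_factor in [-1, 1]: Sustained (-1) lengthens the interval,
    Sudden (+1) shortens it."""
    return max(2, round(base_frames * (1.0 - 0.5 * time_factor)))

def velocity_peak_position(time_factor: float) -> float:
    """Fraction of the interval (0..1) at which velocity peaks;
    Sudden movements accelerate hard and peak early."""
    return 0.5 - 0.25 * time_factor
```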

88 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 EMOTE (Expressive MOTion Engine) Implementation
– 3D animation control module using the LMA Effort and Shape motion control scheme.
– Spatial description as a series of end-effector positions.
– Qualitative description using Effort and Shape sliders (parameters).
– Works with inverse kinematics to generate real-time motion.
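How these pieces might fit together at runtime, as a hypothetical driver loop reusing the Effort/Shape records sketched earlier; the in-betweener and IK solve are reduced to stubs and are not the real EMOTE API:

```python
def expand_keypoints(keys, effort, shape, n=30):
    """Stub in-betweener: linear interpolation of end-effector targets.
    A real implementation would shape the path and timing from
    `effort` and `shape` via the kinematic model above."""
    (p0, _), (p1, _) = keys
    for i in range(n + 1):
        t = i / n
        yield tuple(a + t * (b - a) for a, b in zip(p0, p1))

def solve_ik(target):
    """Stub: a real solver returns joint angles reaching `target`."""
    return target

keys = [((0.3, 1.1, 0.4), 0), ((0.1, 1.4, 0.5), 30)]  # (position, frame)
effort = Effort(time=+0.8, weight=-0.5)               # sudden and light
shape = Shape(vertical=+0.6)                          # rising

for target in expand_keypoints(keys, effort, shape):
    pose = solve_ik(target)                           # per-frame IK solve
```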

89 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 EMOTE Interface

90 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 EMOTE Interface -- Effort and Shape Sliders

91 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Effort Phrasing

92 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Shape Phrasing

93 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 EMOTE Sampler Video

94 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Define Links between Gesture Selection and Agent Model
– Gesture performance cues agent state.
– Model individuals (specific people).
– Normal people show a variety of EMOTE parameters during gestures.
– Emotional states and some pathologies are indicated by a reduced spectrum of EMOTE parameters.
– Model such parameter distributions.
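One way to realize "model such parameter distributions": draw per-gesture EMOTE parameters from a per-agent Gaussian, and model flattened affect or pathology (an assumption consistent with the slide) by shrinking the spread:

```python
import random

def sample_effort(means: dict, spread: float) -> dict:
    """Draw one gesture's Effort values around an agent's means,
    clamped to the assumed [-1, 1] slider range."""
    return {axis: max(-1.0, min(1.0, random.gauss(m, spread)))
            for axis, m in means.items()}

means = {"space": 0.2, "weight": 0.0, "time": 0.1, "flow": -0.3}
expressive = sample_effort(means, spread=0.4)    # varied gestures
flattened = sample_effort(means, spread=0.05)    # reduced spectrum
```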

95 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Building Smart Agents
– Introduction and Applications
– Smart Avatars
– Parameterized Action Representation
– Natural Language Instructions
– Automating Attention
– Agent Manner
– Building PARs

96 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Towards Interactively Building Parameterized Actions
– Motion abstraction and mapping with spatial constraints (SIGGRAPH ‘99 Sketch: Rama Bindiganavale).
– Automatically abstract semantically significant points in an agent’s action into spatial and visual constraints, which are then used to construct a PAR for that motion.
– Populate the Actionary from real examples.

97 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Objectives
– Map PARs to agents of different anthropometric sizes while maintaining the spatial and visual constraints and a similar motion style.
– Consider actions that involve external objects and the agent’s own body.
– Focus on goal achievement rather than trajectories.

98 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Related Animation Techniques
– Motion re-targeting
– Motion signal processing
But we use concepts from computer vision analysis:
– Constraint recognition
  – for end effectors and visual attention
– Spatial proximity of tagged features
  – zero-crossings of acceleration

99 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Method
– Capture motion from the primary agent.
– Find acceleration zero-crossings.
– Co-occurrence of zero-crossings and spatial proximity of end-effectors with objects automatically indicates the start/end of constraints.
– Objects can be self / fixed / mobile.
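A compact sketch of that detection step; the proximity threshold is an assumption:

```python
import numpy as np

def constraint_frames(positions, obj_positions, dt, prox=0.05):
    """positions, obj_positions: (N, 3) end-effector and object samples
    taken every `dt` seconds. Returns frames where a tangential
    acceleration zero-crossing coincides with spatial proximity."""
    vel = np.gradient(positions, dt, axis=0)
    speed = np.linalg.norm(vel, axis=1)
    acc = np.gradient(speed, dt)                     # tangential accel.
    crossings = np.where(np.diff(np.sign(acc)) != 0)[0]
    near = np.linalg.norm(positions - obj_positions, axis=1) < prox
    return [int(i) for i in crossings if near[i]]    # constraint starts/ends
```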

100 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 “Drink from a mug”

101 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Motions Stored in PAR
– Constraints become part of the PAR description of the movement.
– Motions may be replayed on different-sized agents.
– Movement style and significant semantic features are preserved:
  – contacts
  – visual attention

102 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Motion Abstraction Video

103 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Conclusions
– Instruction execution must be context-, perception-, and agent-sensitive.
– Cognitive Science and Movement Analysis help model human behaviors.
– Language interfaces (through PAR) expand usability and agent building.
– A new concept in dictionaries: the Actionary™ translates text into action.

104 July 24, 2000 Smart Animated Agents -- SIGGRAPH Course #24 Acknowledgments
Colleagues: Martha Palmer, Aravind Joshi, Jan Allbeck, Aaron Bloomfield, MeeRan Byun, Diane Chi, Sonu Chopra, Monica Costa, Rama Bindiganavale, Charles Erignac, Ambarish Goswami, Karin Kipper, Seung-Joo Lee, Sooha Lee, Jianping Shi, Hogeun Shin, William Schuler, and Liwei Zhao.
Sponsors: NSF, ONR, DARPA, NASA, ARO
THANK YOU!

