
1 Versatile Human Behavior Generation via Dynamic, Data-Driven Control
Tao Yu
COMP 768

2 Motivation
Motion of virtual characters is prevalent in:
- Games
- Movies (visual effects)
- Virtual reality
- And more...
Examples: FIFA 2006 (EA), NaturalMotion endorphin

3 Motivation
What virtual characters should be able to do:
1. Lots of behaviors - leaping, grasping, moving, looking, attacking
2. Exhibit personality - move "sneakily" or "aggressively"
3. Awareness of the environment - balance/posture adjustments
4. Physical force-induced movements - jumping, falling, swinging

4 Outline
- Motion generation techniques
  - Motion capture and key-framing
  - Data-driven synthesis
  - Physics-based animation
  - Hybrid approaches
- Dynamic motion controllers
  - Quick ragdoll introduction
  - Controllers
- Transitioning between simulation and motion data
  - Motion search - when and where
  - Simulation-driven transition - how

5 Mocap and Key-framing
(+) Captures style and subtle nuances
(+) Absolute control - "what you see is what you get"
(-) Difficult to adapt, edit, and reuse
(-) Not physically dynamic, especially for highly dynamic motion

6 Data-driven synthesis
Generate motion from examples:
- Blending, displacement maps
- Kinematic controllers built upon existing data
- Optimization / learned statistical models
(+) Creators retain control: creators define all rules for movement
(-) Violates the "checks and balances" of motion: motion control abuses its power over physics
(-) Limits emergent behavior

7 Physics-based animation
- Ragdoll simulation
- Dynamic controllers
(+) Interacts well with the environment
(-) "Ragdoll" movement is lifeless
(-) Difficult to develop complex behaviors

8 Hybrid approaches
Mocap provides stylistic realism; physical simulation provides physical realism.
Hybrid approaches combine the best of both:
- Activate either one when most appropriate
- Add life to ragdolls using control systems (only simulate behaviors that are manageable)

9 A high-level example

10 Outline
- Motion generation techniques
  - Motion capture and key-framing
  - Data-driven synthesis
  - Physics-based animation
  - Hybrid approaches
- Dynamic motion controllers
  - Quick ragdoll introduction
  - Controllers
- Transitioning between simulation and motion data
  - Motion search - when and where
  - Simulation-driven transition - how

11 Overview of dynamic controller
- Decision making: objectives and current state x[t] → desired motion x_d[t], i.e. x_d[t] = Goal(x[t])
- Motion control: desired motion x_d[t] and current state x[t] → motor forces u[t], i.e. u[t] = MC(x_d[t] - x[t])
- Physics: current state x[t] and forces u[t] → next state x[t+1], i.e. x[t+1] = P(x[t], u[t])
(Block diagram: Decision Making → Motion Control → Physics, with the new state x[t+1] fed back to both.)
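A minimal Python sketch of this loop follows; Goal, MC, and P are stood in by toy placeholder functions, and the gain, time step, and state size are illustrative assumptions rather than values from the presentation.

```python
import numpy as np

def goal(x):
    """Decision making: map the current state to a desired motion (here: drive toward zero)."""
    return np.zeros_like(x)

def motion_control(x_des, x, gain=50.0):
    """Motion control: turn the tracking error into motor forces u[t]."""
    return gain * (x_des - x)

def physics(x, u, dt=1.0 / 60.0):
    """Physics: advance the state one step given the applied forces (toy integrator)."""
    return x + dt * u

x = np.random.randn(6)                 # current state x[t] (illustrative size)
for _ in range(100):
    x_des = goal(x)                    # x_d[t] = Goal(x[t])
    u = motion_control(x_des, x)       # u[t] = MC(x_d[t] - x[t])
    x = physics(x, u)                  # x[t+1] = P(x[t], u[t])
```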

12 Physics: setting up ragdolls
Given a dynamics engine:
- Set a primitive for each body part (mass and inertial properties)
- Create 1-, 2-, or 3-DOF joints between parts
- Set joint-limit constraints for each joint
- Apply external forces (gravity, impacts, etc.)
The dynamics engine supplies:
- Updated positions/orientations
- Collision resolution with the world
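A minimal sketch of the data such a setup needs, using hypothetical record types rather than a real engine's API (a real engine like ODE exposes its own calls for each of these steps); the specific parts, masses, and limits are made up for illustration.

```python
from dataclasses import dataclass

@dataclass
class BodyPart:
    name: str
    primitive: str        # "capsule", "box", or "sphere"
    mass: float           # mass; inertia follows from the primitive's dimensions
    dims: tuple           # primitive dimensions in meters

@dataclass
class Joint:
    parent: str
    child: str
    dof: int              # 1, 2, or 3 rotational degrees of freedom
    limits: tuple         # (low, high) angle limits in radians, per DOF

ragdoll = {
    "parts": [
        BodyPart("torso", "box", 30.0, (0.3, 0.5, 0.2)),
        BodyPart("upper_arm_r", "capsule", 2.0, (0.05, 0.3)),
    ],
    "joints": [
        Joint("torso", "upper_arm_r", dof=3, limits=(-1.5, 1.5)),
    ],
}
```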

13 Controller types
- Basic joint-torque controller (low-level control)
  - Sparse pose control (may be specified by an artist)
  - Continuous control (e.g., tracking mocap data)
- Hierarchical controller (layered controllers)
  - A higher-level controller determines the correct desired values for the low-level one
  - Derived from sensor or state info: support polygon, center of mass, body contacts, etc.

14 Joint-torque controller
Proportional-Derivative (PD servo) controller:
- Actuates each joint toward its desired target: torque = k_s * (θ_des - θ) - k_d * dθ/dt
- Acts like a damped spring attached to the joint, with its rest position at the desired angle
- θ_des is the desired joint angle and θ is the current angle
- k_s and k_d are the spring and damper gains
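A one-function sketch of this servo; the gains in the usage line are illustrative, not tuned values from the work.

```python
def pd_torque(theta_des, theta, theta_dot, k_s, k_d):
    """PD servo: spring toward the desired angle plus a damper on the joint velocity."""
    return k_s * (theta_des - theta) - k_d * theta_dot

# Pull a joint at rest (0 rad) toward 0.5 rad; gains are illustrative, not tuned values.
tau = pd_torque(theta_des=0.5, theta=0.0, theta_dot=0.0, k_s=300.0, k_d=30.0)
```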

15 Live demo Created with http://www.ode.org

16 Outline
- Motion generation techniques
  - Motion capture and key-framing
  - Data-driven synthesis
  - Physics-based animation
  - Hybrid approaches
- Dynamic motion controllers
  - Quick ragdoll introduction
  - Controllers
- Transitioning between simulation and motion data
  - Motion search - when and where
  - Simulation-driven transition - how

17 Simulating falling and recovering behavior [Mandel 2004]

18 Transitioning between techniques
Motion data → simulation
- When: significant external forces are applied to the virtual character
- How: simply initialize the simulation with the pose and velocities extracted from the motion data
Simulation → motion data
- When and where: when some appropriate pose is reached (hard to decide); the motion frame closest to the simulated pose
- How: drive the simulation toward the matched motion data using a PD controller
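A small sketch of the motion data → simulation direction, assuming joint velocities are estimated by finite differences of consecutive frames; the array sizes and the finite-difference choice are assumptions for illustration.

```python
import numpy as np

def init_simulation_from_mocap(frames, i, dt):
    """Seed the simulation with the pose at frame i and a finite-difference velocity estimate."""
    pose = frames[i]
    vel = (frames[i] - frames[i - 1]) / dt
    return pose, vel

frames = np.random.randn(120, 42)      # 120 frames of 42 joint parameters (made-up sizes)
pose, vel = init_simulation_from_mocap(frames, i=60, dt=1.0 / 30.0)
```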

19 Motion state spaces
- State space of the data-driven technique: any pose present in the motion database
- State space of the dynamics-based technique: the set of poses allowed by physical constraints
- The latter is larger because it can produce motion that is difficult to animate or capture, and it includes a large set of unnatural poses
- A correspondence must be made to allow transitions between the two

20 Motion searching
Problem: find the nearest matches in the motion database to the current simulated motion.
Approach:
1. Data representation - joint positions
2. Process into a spatial data structure - kd-tree / bbd-tree (box decomposition)
3. Search the structure at runtime - the query pose comes from the simulation; approximate nearest neighbor (ANN) search
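A sketch of steps 2 and 3 using SciPy's kd-tree and its eps parameter for approximate queries; the cited work uses the ANN library with bbd-trees, so this is only an analogous setup, and the database and joint counts are made up.

```python
import numpy as np
from scipy.spatial import cKDTree

# Each database row: concatenated joint positions for one mocap frame (sizes are illustrative).
database = np.random.randn(10000, 3 * 15)
tree = cKDTree(database)                      # step 2: build the spatial data structure offline

query = np.random.randn(3 * 15)               # step 3: query pose coming from the simulation
dist, idx = tree.query(query, k=1, eps=0.1)   # approximate nearest neighbor; eps is the ANN slack
```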

21 Data representation: joint positions
- Need a representation that allows numerical comparison of body postures
- Joint angles are not as discriminating as joint positions
- Ignore root translation and align about the vertical axis
- May also want to include joint velocities: velocity is accounted for by including surrounding frames in the distance computation

22 Distance metric
- J: number of joints
- w_j: joint weight
- p: global position of a joint
- T: transformation to align the first frames
(Figure: original poses, joint positions, and aligned positions)
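The slide's formula image is not reproduced in this transcript; the sketch below is an assumed reconstruction, a weighted sum of squared joint-position differences after alignment, consistent with the symbols listed above.

```python
import numpy as np

def pose_distance(pose_a, pose_b, weights, T):
    """Weighted sum of squared joint-position differences after aligning pose_b with T.

    pose_a, pose_b: (J, 3) global joint positions
    weights:        (J,) per-joint weights w_j
    T:              4x4 homogeneous transform aligning pose_b's root and facing with pose_a
    """
    homog = np.hstack([pose_b, np.ones((pose_b.shape[0], 1))])  # (J, 4)
    aligned = (T @ homog.T).T[:, :3]                            # aligned joint positions
    diff = pose_a - aligned
    return float(np.sum(weights * np.sum(diff * diff, axis=1)))
```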

23 Searching process
Approximate Nearest Neighbor (ANN) search:
- First finds the cell containing the query point in the spatial data structure built over the input data points
- A randomized search then examines surrounding cells containing points within the given ε threshold distance of the actual nearest neighbors
- Results are guaranteed to be within a factor of (1 + ε) of the distance to the actual nearest neighbors
- O(log³ n) expected run time and O(n log n) space requirement
- Much better in practice than exact k-nearest-neighbor search as the dimensionality of the points increases

24 Speeding up search
- Curse of dimensionality
- Search each joint position separately: n 3-DOF searches are faster than one 3n-DOF search
- Pair more joints together to increase accuracy
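A sketch of the per-joint idea: build one small tree per joint and pool the returned frame indices as candidates. How candidates are combined (union, then re-ranking with the full metric) is an assumption of this sketch, as are the sizes.

```python
import numpy as np
from scipy.spatial import cKDTree

joints = np.random.randn(10000, 15, 3)        # per-frame, per-joint positions (made-up sizes)
per_joint_trees = [cKDTree(joints[:, j, :]) for j in range(joints.shape[1])]

def candidate_frames(query_pose, k=5):
    """Search each 3-DOF joint position in its own tree and pool the frame indices."""
    hits = set()
    for j, tree in enumerate(per_joint_trees):
        _, idx = tree.query(query_pose[j], k=k)
        hits.update(np.atleast_1d(idx).tolist())
    return hits                               # re-rank these candidates with the full metric

candidates = candidate_frames(np.random.randn(15, 3))
```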

25 Simulating behavior
- Model the reaction to impacts that cause loss of balance
- Two controllers handle the before-contact and after-contact phases respectively
- Ensure a transition to a balanced posture in the motion data

26 Fall controller  Aim: produce biomechanically inspired, protective behaviors in response to the many different ways a human may fall to the ground.

27 Fall controller
Continuous control strategy:
- Four controller states according to the falling direction: backward, forward, right, left
- During each state, one or both arms are controlled to track the predicted landing positions of the shoulders
- The goal of the controlled arm is to have the wrist intersect the line between the shoulder and its predicted landing position
- A small natural bend is added at the elbow, and the desired angles for the rest of the body are set to the initial angles at the time the fall controller is activated
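A sketch of the wrist-target geometry described above; the reach_fraction parameter is a hypothetical knob, and a full controller would also respect arm length.

```python
import numpy as np

def wrist_target(shoulder, predicted_landing, reach_fraction=0.6):
    """Place the wrist target on the line from the shoulder to its predicted landing point.

    reach_fraction is illustrative; a full controller would also clamp the target to arm length.
    """
    return shoulder + reach_fraction * (predicted_landing - shoulder)

shoulder = np.array([0.0, 1.4, 0.0])
landing = np.array([0.5, 0.0, 0.3])           # predicted landing position of the shoulder
target = wrist_target(shoulder, landing)
```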

28 Fall controller
Determine the controller state:
- θ is the facing direction of the character
- V is the average velocity of the limbs
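The decision rule itself appears only as an image on the original slide, so the sketch below is one plausible form: express the average limb velocity in the character's facing frame and pick the dominant direction. The axis conventions are assumptions.

```python
import numpy as np

def fall_state(facing_theta, avg_limb_velocity):
    """Pick backward/forward/left/right from the fall velocity expressed in the facing frame."""
    c, s = np.cos(facing_theta), np.sin(facing_theta)
    # Horizontal velocity in the character's local frame (x forward, y left); axis conventions
    # are assumptions for this sketch.
    vx = c * avg_limb_velocity[0] + s * avg_limb_velocity[2]
    vy = -s * avg_limb_velocity[0] + c * avg_limb_velocity[2]
    if abs(vx) >= abs(vy):
        return "forward" if vx > 0 else "backward"
    return "left" if vy > 0 else "right"

state = fall_state(0.0, np.array([-1.0, 0.0, 0.2]))   # -> "backward"
```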

29 Fall controller
Determine the target shoulder joint angles:
- Can change as the simulation steps forward
- k_s and k_d are properly tuned

30 Settle controller
Aim: drive the character to a similar motion clip at an appropriate time, beginning when the hands impact the ground.
Two states:
- Absorb impact: gains are adjusted to reduce hip and upper-body velocity; lasts half a second before the next state
- ANN search: find a frame in the motion database that is close to the currently simulated posture; use the found frame as the target while continuing to absorb the impact; the simulated motion is smoothly blended into the motion data
(Final results demo)

31 An alternative for response motion synthesis [Zordan 2005]
Problem: generating dynamic response motion to an external impact.
Insight:
- Dynamics is often needed only for a short time (a burst)
- After that, the utility of the dynamics decreases due to the lack of good behavior control
- Return to mocap once the character becomes "conscious" again

32 Generating dynamic response motion
1. Transition to simulation when the impact takes place
2. Search the motion data for a transition-to sequence similar to the simulated response motion
3. Run a second simulation with a joint-torque controller actuating the character toward the matched motion
4. Blend at the end to eliminate the discontinuity between the simulated and transition-to motions

33 Motion selection
Aim: find a transition-to motion.
- Frame windows are compared between the simulation and the motion data
- Frames are aligned so that the root positions and orientations of the start frames of both windows coincide
- The distance between the two windows is defined in terms of:
  - p_b, θ_b: position and orientation of body part b
  - w_i: window weight, a quadratic function with the highest value at the start frame and decreasing for subsequent frames
  - w_pb, w_θb: linear and angular distance scales for each body part
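The window-distance formula is an image not reproduced here; the sketch below follows the listed symbols, with orientations reduced to a single angle per body part for brevity (an assumption of this sketch).

```python
import numpy as np

def window_distance(sim_win, data_win, w_frame, w_pos, w_ang):
    """Compare a simulated frame window with a mocap window (after root alignment).

    sim_win, data_win: (F, B, 4) arrays - per frame, per body part: position (3) + one angle.
                       (Orientations are reduced to a single angle here for brevity.)
    w_frame:           (F,) per-frame weights w_i, largest at the start frame
    w_pos, w_ang:      (B,) linear and angular scales per body part
    """
    d = 0.0
    for i in range(sim_win.shape[0]):
        pos_err = np.sum((sim_win[i, :, :3] - data_win[i, :, :3]) ** 2, axis=1)
        ang_err = (sim_win[i, :, 3] - data_win[i, :, 3]) ** 2
        d += w_frame[i] * np.sum(w_pos * pos_err + w_ang * ang_err)
    return d
```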

34 Transition motion synthesis
Aim: generate motion to fill the gap between the beginning of the interaction and the found motion data.
Realized in two steps:
1. Run a second simulation to track the intermediate sequence
2. Blend the physically generated motion into the transition-to motion data

35 Transition motion synthesis
Simulation 2:
- An inertia-scaled PD servo is used to compute the torque at each joint
- The tracked sequence is generated by blending the start and end frames using SLERP with an ease-in/ease-out
- A deliberate delay in tracking is introduced to make the reaction look realistic
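A sketch of generating the tracked sequence: SLERP between the start and matched end orientations of each joint, warped by an ease-in/ease-out curve. Smoothstep is used here as one common ease choice; the actual curve is not specified in the transcript.

```python
import numpy as np

def ease_in_out(t):
    """Smoothstep ease curve on [0, 1]."""
    return t * t * (3.0 - 2.0 * t)

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = float(np.dot(q0, q1))
    if dot < 0.0:                         # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                      # nearly parallel: fall back to normalized lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    omega = np.arccos(np.clip(dot, -1.0, 1.0))
    return (np.sin((1.0 - t) * omega) * q0 + np.sin(t * omega) * q1) / np.sin(omega)

def tracked_sequence(q_start, q_end, n_frames):
    """Per-joint target orientations from the start pose to the matched end pose."""
    return [slerp(q_start, q_end, ease_in_out(i / (n_frames - 1))) for i in range(n_frames)]

q_a = np.array([1.0, 0.0, 0.0, 0.0])
q_b = np.array([0.7071, 0.7071, 0.0, 0.0])    # 90-degree rotation about x (w, x, y, z)
targets = tracked_sequence(q_a, q_b, n_frames=10)
```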

36 Conclusion
Hybrid approaches:
- Complex dynamic behaviors are hard to model physically
- A viable option for synthesizing character motion under a wider range of situations
- Able to incorporate unpredictable interactions, especially in games
Making it more practical:
- Automatic computation of motion-controller parameters [Allen 2007]
- Speeding up the search via a pre-learned model [Zordan 2007]

37 References
- Mandel, M. 2004. Versatile and Interactive Virtual Humans: Hybrid Use of Data-Driven and Dynamics-Based Motion Synthesis. Master's thesis, Carnegie Mellon University.
- Zordan, V. B., Majkowska, A., Chiu, B., Fast, M. 2005. Dynamic Response for Motion Capture Animation. ACM Transactions on Graphics 24, 3, 697-701.
- Allen, B., Chu, D., Shapiro, A., Faloutsos, P. 2007. On the Beat! Timing and Tension for Dynamic Characters. ACM SIGGRAPH/Eurographics Symposium on Computer Animation.
- Zordan, V. B., Macchietto, A., Medina, J., Soriano, M., Wu, C. C. 2007. Interactive Dynamic Response for Games. ACM SIGGRAPH Sandbox Symposium.

