Authoring Directed Gaze for Full-Body Motion Capture


1 Authoring Directed Gaze for Full-Body Motion Capture
Paper by Tomislav Pejsa, Daniel Rakita, Bilge Mutlu, and Michael Gleicher
Presented by Roald Melssen and Matej Milinkovic
Matej

2 Table of Contents
Introduction
Related Work
Implementation
Evaluation
Limitations & Future Work
Discussion
Matej

3 Introduction
Directed gaze: movement of the line of sight
Conveys focus, personality, and intent
Involves coordinated movement of the eyes, head, and torso
Usually animated by hand
The paper's goal: automatically add editable directed gaze to a captured motion
Matej

4 Related Work
Gaze Synthesis: prior methods focus on rapid eyeball movements (Deng et al. [2005]; Lee et al. [2002]) or synthesize coordinated movements of the eyes, head, and body toward targets (Lance et al. [2010]; Heck [2007])
Peters et al. [2010] and Pejsa et al. [2015] introduce procedural models inspired by neurophysiological observations
The paper extends these methods, which are designed to provide biologically plausible movements across their parametric range
Matej

5 Related Work
Gaze Inference: methods that analytically determine when and where the character should look
Research on gaze control by Henderson [2003] has shown that people's gaze is driven by two mechanisms: spontaneous bottom-up attention and deliberate top-down attention
The paper focuses on bottom-up attention, which determines gaze using visual features such as contrast, orientation, and motion [Peters and O'Sullivan 2003; Peters and Itti 2008]
In other work, gaze targets are determined from spatial and kinematic properties of scene entities, such as proximity, velocity, and orientation [Cafaro et al. 2009; Grillon and Thalmann 2009; Kokkinara et al. 2011]
Matej

6 Implementation
Gaze Inference: determine gaze instances from mocap data; each instance consists of a duration and a look-at target
Gaze Synthesis: adjust the motion to gaze at the target, based on inverse kinematics
Gaze Editing: give the user control over the generated gaze instances; edits feed back into step 2 (see the sketch after this list)
Roald
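A minimal sketch of how the three stages could be wired together. The paper's system is a Unity tool; every name below is hypothetical, written in Python purely for illustration:

```python
from typing import Callable, Sequence

# Hypothetical wiring of the three stages described above; the actual
# system is a Unity implementation and all names here are invented.
def author_gaze(mocap, scene,
                infer: Callable, synthesize: Callable,
                edits: Sequence[Callable] = ()):
    instances = infer(mocap, scene)            # 1. Gaze Inference: when and where to look
    animation = synthesize(mocap, instances)   # 2. Gaze Synthesis: IK toward the targets
    for edit in edits:                         # 3. Gaze Editing: user adjustments...
        instances = edit(instances)
        animation = synthesize(mocap, instances)  # ...feed back into synthesis
    return animation
```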

7 Implementation: Gaze Inference
A gaze instance is a tuple $G = (f_s, f_x, f_e, \mathbf{p}, \alpha_H, \alpha_T)$, where...
$f_s$ is the gaze shift start frame
$f_x$ is the fixation start frame
$f_e$ is the fixation end frame
$\mathbf{p}$ is the gaze target (3D vector)
$\alpha_H$ is the head alignment parameter
$\alpha_T$ is the torso alignment parameter
Roald
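The tuple maps naturally onto a small record type; a hypothetical Python sketch of the data structure:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class GazeInstance:
    """One gaze instance G = (f_s, f_x, f_e, p, alpha_H, alpha_T)."""
    f_s: int                       # gaze shift start frame
    f_x: int                       # fixation start frame
    f_e: int                       # fixation end frame
    p: Tuple[float, float, float]  # 3D look-at target position
    alpha_H: float                 # head alignment parameter, in [0, 1]
    alpha_T: float                 # torso alignment parameter, in [0, 1]
```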

8 Implementation: Gaze Inference
Input: motion capture data
Output: a list of gaze instances, i.e., a list of tuples $(f_s, f_x, f_e, \mathbf{p}, \alpha_H, \alpha_T)$
Algorithm:
Gaze Instance Inference (determine timing: $f_s, f_x, f_e$)
Gaze Target Inference (determine target: $\mathbf{p}$)
Computing Alignment Parameters (determine head and torso alignment: $\alpha_H, \alpha_T$)
[Figure: motion timeline segmented into gaze instances G1, G2, G3, G4]
Roald

9 Implementation: Gaze Instance Inference
Need to calculate the timing parameters $f_s, f_x, f_e$
Idea: analyze joint angular velocities
When someone shifts their gaze to a point, the body accelerates and then decelerates
The maximum acceleration is attained at the midpoint of the gaze event
Let $a_j(f)$ be the angular acceleration of joint $j$ at frame $f$, normalized to the range $[0, 1]$ as $\hat{a}_j(f)$
The probability of a significant gaze event occurring at frame $f$ is $p(f) = \sum_{j \in J} w_j \, \hat{a}_j(f)$
$J$ is the set of all joints
$w_j$ is a weight for joint $j$ which determines how important it is, with higher weights for joints closer to the eyes
Roald
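A sketch of this probability signal, assuming per-joint angle curves and finite differences for the acceleration; the weighting and the per-clip max normalization are illustrative choices:

```python
import numpy as np

def gaze_event_probability(joint_angles, weights, dt=1.0 / 30.0):
    """p(f) = sum over joints j of w_j * a_hat_j(f), where a_hat_j is the
    angular acceleration magnitude of joint j, normalized to [0, 1].

    joint_angles: dict mapping joint name -> (F,) or (F, D) array of
                  joint rotation angles in radians
    weights:      dict mapping joint name -> w_j, larger for joints
                  closer to the eyes
    """
    p = None
    for j, angles in joint_angles.items():
        vel = np.gradient(np.asarray(angles, dtype=float), dt, axis=0)
        acc = np.abs(np.gradient(vel, dt, axis=0))       # angular acceleration
        acc = acc.reshape(acc.shape[0], -1).sum(axis=1)  # collapse extra dims
        a_hat = acc / (acc.max() + 1e-9)                 # normalize to [0, 1]
        p = weights[j] * a_hat if p is None else p + weights[j] * a_hat
    return p  # peaks mark likely significant gaze events
```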

10 Implementation: Gaze Instance Inference
Find the frames that locally maximize $p(f)$
Each such frame likely contains a significant gaze event (a gaze start or a gaze end)
Thus, the interval between two consecutive significant gaze events is classified as a gaze instance (see the sketch below)
[Figure: probability signal over time, with significant gaze events delimiting gaze instances G1, G2, G3, G4]
Roald
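A sketch of this segmentation step under the definitions above; the peak threshold and the pairing of consecutive events are illustrative choices, not the paper's exact procedure:

```python
import numpy as np
from scipy.signal import find_peaks

def segment_gaze_instances(p, min_height=0.3):
    """Treat local maxima of p(f) as significant gaze events and classify
    each interval between consecutive events as one gaze instance.
    Only the rough timing (f_s, f_e) is produced here; the fixation start
    f_x, the target, and the alignments are inferred in later steps."""
    events, _ = find_peaks(np.asarray(p), height=min_height)
    return [(int(s), int(e)) for s, e in zip(events[:-1], events[1:])]
```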

11 Implementation: Gaze Target Inference
We now know the timing of the gaze instance, defined by $f_s, f_x, f_e$
Now we want to find the look-at target $\mathbf{p}$
Three heuristics:
The character likely looks at points along the movement direction of the head
The character likely looks at important objects
The character likely looks at objects just before picking them up or touching them
Idea: build a 2D probability distribution over the eye's view and pick the point with the highest probability as the target $\mathbf{p}$
The distribution combines a directional term, an importance term, and a hand-contact term (see the sketch below)
Roald
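A sketch of the target selection, assuming the three heuristic terms have already been rendered as 2D maps over the eye's view; multiplying the terms is an illustrative combination, not necessarily the paper's exact formula:

```python
import numpy as np

def infer_gaze_target(p_dir, p_imp, p_hand):
    """Combine the three heuristic terms into a single 2D probability map
    over the character's eye view and pick the most probable pixel as the
    look-at target. Each p_* is an (H, W) array in [0, 1]."""
    prob = np.asarray(p_dir) * np.asarray(p_imp) * np.asarray(p_hand)
    prob = prob / (prob.sum() + 1e-9)  # normalize into a distribution
    y, x = np.unravel_index(np.argmax(prob), prob.shape)
    return x, y  # pixel coordinates, to be unprojected into the 3D target p
```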

12 Implementation: Gaze Target Inference

13 Implementation: Computing Alignment Parameters
Only parameters left to compute: $\alpha_H$ and $\alpha_T$
Idea: project the end rotation onto the arc from the no-alignment rotation to the rotation that would fully align the head (or torso) with the look-at target, then determine the arc ratio
For the torso (the head is analogous):
$q_s$ is the torso rotation at the gaze start
$q_e$ is the torso rotation at the gaze end
$q_f$ is the rotation that would fully align the torso with the target
$q_{min}$ is the rotation that corresponds to no alignment at all
Projecting $q_e$ onto the arc between $q_{min}$ and $q_f$ gives us $q_{proj}$
Finally, we calculate $\alpha_T = \angle(q_{min}, q_{proj}) \, / \, \angle(q_{min}, q_f)$ (see the sketch below)
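A sketch of the arc-ratio computation using SciPy rotations; the sampling-based projection onto the slerp arc is an illustrative shortcut rather than the paper's exact method:

```python
import numpy as np
from scipy.spatial.transform import Rotation, Slerp

def alignment_parameter(q_min, q_e, q_full, samples=200):
    """Project the observed end rotation q_e onto the slerp arc from
    q_min (no alignment) to q_full (full alignment with the target),
    then return alpha = angle(q_min, q_proj) / angle(q_min, q_full).
    All arguments are single scipy Rotation objects."""
    slerp = Slerp([0.0, 1.0], Rotation.concatenate([q_min, q_full]))
    ts = np.linspace(0.0, 1.0, samples)
    arc = slerp(ts)
    d = (arc.inv() * q_e).magnitude()   # geodesic distance of q_e to each arc sample
    q_proj = slerp([ts[np.argmin(d)]])[0]
    full_angle = (q_min.inv() * q_full).magnitude()
    return float((q_min.inv() * q_proj).magnitude() / (full_angle + 1e-9))
```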

14 Evaluation
Authoring effort is compared using two metrics: time taken and number of keys set
On average, authoring took 25 minutes and required 86 keys per minute of animation
Computation time was 1.5 minutes per minute of animation
Second comparison: effort required to edit gaze animation in a scene that already contained eye movements
An experienced animator used MotionBuilder while a novice animator used the paper's tool; the novice took about one third of the time and number of operations of the experienced animator
Matej

15 Evaluation
Animation quality conditions: (1) no gaze, (2) recorded gaze (using an eye tracker), (3) hand-authored gaze, and (4) synthesized gaze (the paper's approach)
Hypotheses: (1) synthesized gaze would be preferred over no gaze; (2) synthesized gaze would be preferred over recorded gaze; and (3) synthesized gaze would be seen as non-inferior to hand-authored gaze
Design: three separate studies, each consisting of five task trials in a within-participants design; participants chose between videos of each gaze condition, presented in randomized order
Matej

16 Evaluation
Stimuli: 5 x 4 video clips (five scenes in each condition), 9 to 14 seconds in length
Scenes: ChatWithFriends, MakeSandwich, StackBoxes, StealGem, and WalkCones
Measures: (1) animator competence, (2) realism, (3) communicative clarity
Matej

17 Evaluation
Results: the data from the first study support hypothesis 1
The second study showed that synthesized gaze was not significantly preferred over recorded gaze
The third study showed that synthesized gaze was not seen as non-inferior to hand-authored gaze
Scene type had a significant effect on participants' choice of recorded gaze
Matej

18 Evaluation
Results: the study shows that adding eye animation using the paper's approach improves the perceived quality of the animation over having no eye animation at all
There is a small loss in quality compared to expert-crafted animations (video at 3:00)
Matej

19 Limitations & Future Work
The system models only directed gaze
Future work: improve the gaze inference's accuracy and sensitivity
The current implementation of the system is in Unity
Future work: integration with commonly used 3D animation software such as MotionBuilder or Maya
Matej

20 Questions?

21 Discussion
Is this technique actually useful for animators, or is it one of those techniques that researchers think is useful but in practice is not?
We think it is mostly useful for novice animators or short productions (e.g., kids' shows)
Roald

