Modeling the Shape of People from 3D Range Scans


1 Modeling the Shape of People from 3D Range Scans
Dragomir Anguelov AI Lab Stanford University Joint work with Praveen Srinivasan, Hoi-Cheung Pang, Daphne Koller, Sebastian Thrun, James Davis

2 The Dataset
Cyberware scans: two sets of scans (70 scans and 48 scans), each acquired from 4 views, with ~125k polygons (~65k points) per scan.
Problems: missing surface, drastic pose changes.

3 Modeling Human Shape
Pose variation
Body-shape variation
We want to model shape variation both for a single object as its pose changes and for different objects within the same class.

4 Space of Human Shapes Movie
[scape movie]

5 Talk outline
Data processing pipeline
  Registration
  Recovering an articulated skeleton
Modeling the space of human deformations
  Pose deformations
  Body shape deformations
Shape completion
  Partial view completion
  Animating motion capture sequences

6 Talk outline
Data processing pipeline
  Registration
  Recovering an articulated skeleton
Modeling the space of human deformations
  Pose deformations
  Body shape deformations
Shape completion
  Partial view completion
  Animating motion capture sequences

7 Data Processing Pipeline

8 Registration - CC Algorithm [Anguelov et al. 2004]
Input: a pair of scans X and Z. Output: correspondences.
The Correlated Correspondence algorithm computes an embedding of mesh Z into mesh X:
1. Define a discrete Markov net M over the correspondence variables of mesh Z. The Markov net potentials enforce minimal surface deformation, similar local surface appearance, and preservation of geodesic distance.
2. Compute the embedding of Z into X by performing loopy belief propagation on M (a generic sketch follows).
Related work: [Huttenlocher et al. '00], [Coughlan '02]
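To make step 2 concrete, here is a generic max-product loopy belief propagation sketch over a pairwise Markov network. This is not the actual CC implementation: the unary and pairwise potentials that would encode deformation, local appearance, and geodesic-distance terms are left abstract, and all names are illustrative.

```python
import numpy as np

def loopy_bp(unary, pairwise, edges, n_iters=50):
    """unary    : (n_vars, K) log unary potentials
       pairwise : dict {(i, j): (K, K) log pairwise potential}, one per edge
       edges    : list of (i, j) pairs
       returns  : a MAP-style label estimate, one label in [0, K) per variable."""
    n_vars, K = unary.shape
    nbrs = {v: [] for v in range(n_vars)}
    pot = {}
    for (i, j) in edges:
        nbrs[i].append(j); nbrs[j].append(i)
        pot[(i, j)] = pairwise[(i, j)]          # indexed [x_i, x_j]
        pot[(j, i)] = pairwise[(i, j)].T        # same table, reversed direction
    msgs = {(i, j): np.zeros(K) for i in nbrs for j in nbrs[i]}
    for _ in range(n_iters):
        new = {}
        for (i, j) in msgs:
            incoming = np.zeros(K)
            for k in nbrs[i]:
                if k != j:
                    incoming += msgs[(k, i)]
            # message i -> j: maximize over x_i for each value of x_j
            score = (unary[i] + incoming)[:, None] + pot[(i, j)]
            m = score.max(axis=0)
            new[(i, j)] = m - m.max()            # normalize for stability
        msgs = new
    beliefs = unary.copy()
    for i in range(n_vars):
        for k in nbrs[i]:
            beliefs[i] += msgs[(k, i)]
    return beliefs.argmax(axis=1)
```

In the CC setting the variables would be the correspondence variables of mesh Z and the potentials would encode the three constraints listed above.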

9 Results: Human poses dataset
[Figure: model, Cyberware scans, registrations.] Four markers were used on each scan to avoid the need for multiple initializations of loopy BP.

10 Recovering articulation
Input: models, correspondences. Output: rigid parts, skeleton.

11 Recovering articulation [Anguelov et al’04]
Stages of the process:
1. Register meshes using the Correlated Correspondence algorithm
2. Initialize by breaking the template surface into N arbitrary components
3. Cluster the surface into rigid parts
4. Estimate joints
Related work: [Cheung et al. '03]

12 Probabilistic Generative Model
[Graphical model: template points x1 ... xN with part labels a1 ... aN; part transformations T; transformed model points y1 ... yN; instance points z1 ... zK with point correspondences b1 ... bK.]
We want to solve for the part labels A and the part transformations T given all other parameters.

13 Contiguity Prior
Parts are preferably contiguous regions: adjacent points on the surface should have similar labels a1, a2, a3, ... We enforce this with a Markov network, performing collective clustering with a preference that adjacent points on the surface take similar labels.

14 Clustering algorithm
Objective: contiguity prior combined with data likelihood.
Algorithm (alternating optimization; see the sketch below):
- Given the transformations T, perform min-cut* inference to get the labels A
- Given the labels A, solve for the rigid transformations T
* [Greig et al. '89], [Kolmogorov & Zabih '02]
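A simplified sketch of this alternation, assuming a template point set and an instance point set already in correspondence (N x 3 each). For brevity the label step below is a per-point residual argmin; the method in the talk uses min-cut inference with the contiguity prior instead.

```python
import numpy as np

def fit_rigid(src, dst):
    """Least-squares rigid transform (R, t) with R @ src_i + t ~ dst_i (Kabsch)."""
    cs, cd = src.mean(axis=0), dst.mean(axis=0)
    H = (src - cs).T @ (dst - cd)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T      # reflection-safe rotation
    return R, cd - R @ cs

def cluster_rigid_parts(template, instance, labels, n_parts, n_iters=10):
    for _ in range(n_iters):
        # Step 1: given labels, solve for one rigid transformation per part.
        transforms = []
        for p in range(n_parts):
            idx = np.where(labels == p)[0]
            if len(idx) >= 3:
                transforms.append(fit_rigid(template[idx], instance[idx]))
            else:                                 # degenerate / empty part
                transforms.append((np.eye(3), np.zeros(3)))
        # Step 2: given transformations, reassign each point to the part
        # whose transform predicts its instance position best.
        residuals = np.stack(
            [np.linalg.norm(instance - (template @ R.T + t), axis=1)
             for R, t in transforms], axis=1)     # (N, n_parts)
        labels = residuals.argmin(axis=1)
    return labels, transforms
```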

15 Results: Puppet articulation
This result substantially improves on the state of the art: we are able to recover 15 parts (14 joints) correctly from just 7 puppet examples. Current articulated model recovery techniques [Cheung et al. '03] recover at most one joint at a time. The process goes as follows: the human subject first moves the arm at the shoulder only, and the shoulder joint is recovered. In a second sequence the subject moves the arm at the elbow only (while holding everything else still), at which point the elbow is recovered. To recover 14 joints you need 14 such sequences, which are then combined. We can deal automatically with any number of joints given a very small number of training examples (7 in this case).

16 Results: Arm articulation
While the puppet is mostly rigid, here we demonstrate that we can accurately recover the joints of objects undergoing nontrivial deformation.

17 Results: 50 scans of a human
Tree-shaped skeleton found. Rigid parts found.

18 Talk outline
Data processing pipeline
  Registration
  Recovering an articulated skeleton
Modeling the space of human deformations
  Pose deformations
  Body shape deformations
Shape completion
  Partial view completion
  Animating motion capture sequences

19 Modeling Pose Deformation
Learn a regression function: input = joint angles, output = deformations. We want to model shape variation both for a single object as its pose changes and for different objects within the same class.

20 Modeling Pose Deformation
Predict the deformation independently for each triangle, then reconstruct the complete shape from the template.
Related work: [Allen et al. '03], [Sumner & Popovic '04]

21 Representation of pose deformation
Rigid articulated deformation (per-part rotations R) plus non-rigid per-triangle deformations Q. Given estimates of R and Q, synthesizing the shape is straightforward (see the reconstruction below):
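The slide's own equation did not survive extraction; the following is a plausible reconstruction following the published SCAPE formulation, where v_{k,j} = x_{k,j} - x_{k,1} are the edges of template triangle k, Q_k its pose deformation, and R_{l[k]} the rigid rotation of the part l[k] it belongs to:

$$\hat{Y} \;=\; \arg\min_{Y} \sum_{k}\sum_{j=2,3} \left\| R_{\ell[k]}\, Q_k\, v_{k,j} \;-\; (y_{k,j} - y_{k,1}) \right\|^2$$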

22 Learning pose deformation
For each polygon, predict the entries of its deformation matrix Q from the rotations of the nearest 2 joints (represented as twists). The linear regression parameters are fit by least squares (a sketch follows). The values of Q used for training are obtained in the first place by fitting the deformations to the registered scans.
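A hypothetical sketch of this per-polygon regression, assuming the two nearest joints contribute a 6-dimensional twist feature per scan and that per-scan estimates of Q (3x3 per polygon) are already available from the registrations. Names and shapes are illustrative.

```python
import numpy as np

def fit_pose_regression(twists, Q):
    """twists : (n_scans, 6) -- two 3-vector twists per scan for this polygon
       Q      : (n_scans, 3, 3) -- per-scan deformation matrices for this polygon
       returns Gamma with shape (7, 9), mapping [twists, 1] -> vec(Q)."""
    n = twists.shape[0]
    X = np.hstack([twists, np.ones((n, 1))])       # add a bias column
    Y = Q.reshape(n, 9)                            # flatten each 3x3 matrix
    Gamma, *_ = np.linalg.lstsq(X, Y, rcond=None)  # least-squares fit
    return Gamma

def predict_Q(Gamma, twist_pair):
    """Predict a 3x3 deformation matrix for one new pose of this polygon."""
    x = np.append(twist_pair, 1.0)                 # (7,)
    return (x @ Gamma).reshape(3, 3)
```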

23 Twists and exponential maps
From a twist (joint angles represented as an axis-angle vector) to a rotation matrix:
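The slide's formula did not survive extraction; the conversion is the standard exponential map (Rodrigues' formula). For a twist ω with angle θ = ||ω|| and unit axis ω̂ = ω/θ:

$$R \;=\; e^{[\omega]_\times} \;=\; I + \sin\theta\,[\hat\omega]_\times + (1-\cos\theta)\,[\hat\omega]_\times^2, \qquad [\hat\omega]_\times = \begin{pmatrix} 0 & -\hat\omega_3 & \hat\omega_2 \\ \hat\omega_3 & 0 & -\hat\omega_1 \\ -\hat\omega_2 & \hat\omega_1 & 0 \end{pmatrix}$$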

24 Pose deformation space

25 Learning body-shape deformation
Also include the change in shape due to different people, modeled by per-triangle body-shape deformation matrices S. Do PCA over the body-shape matrices S (a sketch follows). Estimates of S are obtained by fitting each registered scan, given the pose deformations.
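A hypothetical sketch of this PCA step, assuming S is stacked as an (n_subjects, 9 * n_triangles) matrix with one row of flattened per-triangle deformation matrices per subject:

```python
import numpy as np

def pca_body_shape(S, n_components=20):
    """PCA over flattened body-shape deformation matrices, one row per subject."""
    mu = S.mean(axis=0)
    _, _, Vt = np.linalg.svd(S - mu, full_matrices=False)
    components = Vt[:n_components]           # principal shape directions
    coeffs = (S - mu) @ components.T         # per-subject coefficients (beta)
    return mu, components, coeffs

def synthesize_shape(mu, components, beta):
    """Reconstruct flattened body-shape matrices from PCA coefficients beta."""
    return mu + beta @ components
```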

26 PCA over body shape

27 Combining pose and body shape spaces
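The slide's equation did not survive extraction; in the published SCAPE model the two spaces are combined by applying both deformations to each template edge, with S_k the body-shape deformation and Q_k the pose-dependent deformation of triangle k:

$$v'_{k,j} \;=\; R_{\ell[k]}\, S_k\, Q_k\, v_{k,j}$$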

28 Talk outline
Data processing pipeline
  Registration
  Recovering an articulated skeleton
Modeling the space of human deformations
  Pose deformations
  Body shape deformations
Shape completion
  Partial view completion
  Animating motion capture sequences

29 Shape completion
Find a surface Y from our space that matches a set of markers Z.
Y[Z]: completed mesh, allowed to deform out of the space spanned by the shape parameters and R in order to match Z.
Y'[Z]: predicted mesh, constrained to lie in the space spanned by the shape parameters and R.
The target is optimized by iteratively solving for the shape parameters, R, or Y while holding the others fixed.
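A hedged reconstruction of the completion objective in the spirit of the SCAPE optimization (the marker weight w_Z is an assumed symbol): a model term keeps Y consistent with the learned deformations, and a marker term pulls the marked vertices of Y toward the observed markers Z:

$$E(Y) \;=\; \sum_{k}\sum_{j=2,3} \left\| R_{\ell[k]}\, S_k\, Q_k\, v_{k,j} - (y_{k,j} - y_{k,1}) \right\|^2 \;+\; w_Z \sum_{l} \left\| y_{l} - z_{l} \right\|^2$$

This energy is minimized by alternating over Y, R, and the shape parameters, as stated above.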

30 Partial view completion
Process:
1. Add a few markers (~6-8)
2. Run the CC algorithm to get > 100 markers
3. Optimize to find Y[Z]

31 Shape completion from motion capture data

32 Conclusions
Presented a data-driven method of modeling human deformations induced by pose and body shape.
Extending the model:
  Nonlinear prediction of pose deformation (ongoing)
  Shape-complete the original scans using the current model
  Acquire and learn from the entire pose/body-shape matrix
  Prior on likely joint angles, e.g. [Popovic et al. '04]
  Enforce temporal consistency in tracking applications
Extending the possible applications:
  Markerless motion capture (shape completion on shape-from-silhouette data)
  Modeling other beasts

