A Probabilistic Model for Component-Based Shape Synthesis Evangelos Kalogerakis, Siddhartha Chaudhuri, Daphne Koller, Vladlen Koltun Stanford University.

Bayesian networks (Pearl, 1988)
A directed acyclic graph (DAG):
– Nodes: random variables
– Edges: direct influence ("causation")
Each variable is conditionally independent of its ancestors given its parents: X_i ⊥ X_ancestors | X_parents. In the classic Earthquake/Burglary/Alarm/Radio/Call example, C ⊥ {R, B, E} | A. This simplifies the chain rule by exploiting conditional independencies.
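
As a quick illustration of how the DAG factorization works, here is a minimal sketch in Python (the CPT values are hypothetical, chosen only to make the classic Earthquake/Burglary network concrete):

```python
# Joint distribution factorized along the DAG (hypothetical CPT values):
# P(B, E, R, A, C) = P(B) P(E) P(R | E) P(A | B, E) P(C | A)
P_B = {True: 0.01, False: 0.99}                      # Burglary
P_E = {True: 0.02, False: 0.98}                      # Earthquake
P_R_given_E = {True: {True: 0.9, False: 0.1},        # Radio report | Earthquake
               False: {True: 0.0001, False: 0.9999}}
P_A_given_BE = {(True, True): 0.95, (True, False): 0.94,   # Alarm | Burglary, Earthquake
                (False, True): 0.29, (False, False): 0.001}
P_C_given_A = {True: 0.7, False: 0.01}               # Call | Alarm

def joint(b, e, r, a, c):
    """P(B=b, E=e, R=r, A=a, C=c) as a product of the local conditionals."""
    p_a = P_A_given_BE[(b, e)] if a else 1.0 - P_A_given_BE[(b, e)]
    p_c = P_C_given_A[a] if c else 1.0 - P_C_given_A[a]
    return P_B[b] * P_E[e] * P_R_given_E[e][r] * p_a * p_c

print(joint(b=True, e=False, r=False, a=True, c=True))   # one full assignment
```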

Goal A tool that automatically synthesizes a variety of new, distinct shapes from a given domain.

Sailing ships vary in:
– Size
– Type of hull, keel and mast
– The number and configuration of masts.
Various geometric, stylistic and functional relationships influence the selection and placement of individual components to ensure that the final shape forms a coherent whole.

Probabilistic Reasoning for Assembly-Based 3D Modeling. Siddhartha Chaudhuri, Evangelos Kalogerakis, Leonidas Guibas, and Vladlen Koltun. ACM Transactions on Graphics 30(4) (Proc. SIGGRAPH), 2011.

That earlier probabilistic model is flat! It describes the relationships among components, but it does NOT tell us how these components form the whole structure.

The method has two stages: offline learning and online shape synthesis.

Offline learning (toy example): the input shapes have two component categories, legs and tabletops. From them the model learns two table styles: one-legged tables (# tabletop: 1, # leg: 1) and four-legged tables (# tabletop: 1 or 2, # leg: 4).

Online synthesis of one-legged tables. Step 1: synthesize a set of components (1 leg & 1 tabletop). Step 2: optimize component placement.

The model structure (random variables):
– R: shape style
– S = {S_l}: component style per category l
– N = {N_l}: number of components from category l
– C = {C_l}: continuous geometric feature vector for components from category l (curvature histograms, shape diameter histograms, scale parameters, spin images, PCA-based descriptors, and lightfield descriptors)
– D = {D_l}: discrete geometric feature vector for components from category l (encodes adjacency information)

For the 4-legged table style: N_top = 1 or 2; N_leg = 4; S_top = rectangular tabletops; S_leg = narrow column-like legs.
For the 1-legged table style: N_top = 1; N_leg = 1; S_top = roughly circular tabletops; S_leg = split legs.

Learning
The input:
– A set of K compatibly segmented shapes.
– For each component, we compute its geometric attributes.
– The training data is thus a set of feature vectors O = {O^1, O^2, ..., O^K}, where O^k = {N^k, D^k, C^k}.
The goal: learn the structure of the model (domain sizes of the latent variables and lateral edges between observed variables) and the parameters of all CPDs in the model.
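
A minimal sketch of how one such training observation could be represented (names and values here are illustrative, not the authors' implementation):

```python
from dataclasses import dataclass
from typing import Dict, List

@dataclass
class Observation:
    """One compatibly segmented training shape O^k = {N^k, D^k, C^k}."""
    num_components: Dict[str, int]               # N^k: component count per category l
    discrete_features: Dict[str, List[int]]      # D^k: adjacency-based discrete features
    continuous_features: Dict[str, List[float]]  # C^k: geometric descriptors per category

# Hypothetical four-legged table (feature values are placeholders).
o1 = Observation(
    num_components={"tabletop": 1, "leg": 4},
    discrete_features={"tabletop": [1], "leg": [1, 1, 1, 1]},
    continuous_features={"tabletop": [0.82, 0.10], "leg": [0.05, 0.91]},
)
```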

The desired structure G is the one that has the highest probability given the input data O [Koller and Friedman 2009]. By Bayes' rule, P(G | O) ∝ P(O | G) P(G), so maximizing P(G | O) amounts to maximizing the marginal likelihood P(O | G) (assuming the prior over structures does not favor any particular G).

Assume prior distributions over the parameters Θ of the model (parameter priors). The marginal likelihood then requires integrating out Θ and summing over all possible assignments to the latent variables R and S:

P(O | G) = \sum_{R,S} \int P(O, R, S | \Theta, G) \, P(\Theta | G) \, d\Theta

The number of integrals is exponentially large!

To make the learning procedure computationally tractable, they use an effective approximation of the marginal likelihood known as the Cheeseman-Stutz score [Cheeseman and Stutz 1996]. It is computed from a fictitious dataset O* that comprises the training data O together with approximate statistics for the values of the latent variables, evaluated at the parameters estimated for a given G. The score thus defines a metric of how good a candidate model is; the goal is to search for the G that maximizes the score.
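
For reference, one standard way to write the Cheeseman-Stutz approximation (following the treatment in Koller and Friedman; the notation here is ours, with O^* the completed dataset and \hat{\Theta} the estimated parameters):

score_CS(G : O) = \log P(O^* | G) + \log P(O | \hat{\Theta}, G) - \log P(O^* | \hat{\Theta}, G)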

What does G mean here? It specifies the number of shape styles (the domain size of R) and, for each component category, the number of component styles (the domain size of S_l), plus the lateral edges between observed variables noted above.

Greedy structure search (a rough code sketch follows below):
– Initially, set the domain size of R to 1 (a single shape style).
– For each category l: set the number of component styles (the domain size of S_l) to 1 and compute the score; then try 2, 3, ..., stopping when the score decreases. The local maximum gives the number of styles for category l. Move to the next category.
– After the search iterates over all variables in S, increase the domain size of R and repeat the procedure.
– The search terminates when the score reaches a local maximum that does not improve over 10 subsequent iterations.
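
A hedged sketch of this search loop in Python (the `score` callable stands in for the Cheeseman-Stutz score; all names are illustrative, and the real system re-estimates CPD parameters for every candidate structure):

```python
def greedy_structure_search(data, categories, score, patience=10):
    """Greedy search over the domain sizes of R and of each S_l (sketch)."""
    r_size = 1                              # start with a single shape style
    s_sizes = {l: 1 for l in categories}    # one component style per category
    best = score(data, r_size, s_sizes)
    stale = 0
    while stale < patience:
        improved = False
        # Sweep over the component-style variables S_l.
        for l in categories:
            while True:
                trial = dict(s_sizes, **{l: s_sizes[l] + 1})
                trial_score = score(data, r_size, trial)
                if trial_score <= best:
                    break                   # score decreased: keep the local maximum
                s_sizes, best, improved = trial, trial_score, True
        # Then try enlarging the domain of the shape-style variable R.
        trial_score = score(data, r_size + 1, s_sizes)
        if trial_score > best:
            r_size, best, improved = r_size + 1, trial_score, True
        stale = 0 if improved else stale + 1
    return r_size, s_sizes
```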

Example with domain size of R = 1 (all tables belong to the same style). For legs:
– Case 1: all legs are of the same style; compute the score.
– Case 2: two styles, narrow column-like legs and split legs; compute the score.
– Case 3: three styles of legs; the score decreases, so stop.
For tabletops: ...

Learned CPDs (example):
CPT of R: P(R = 1) = 5/12, P(R = 2) = 7/12
plus CPT of S_top given R = 1, CPT of S_top given R = 2, CPT of S_leg given R = 1, CPT of S_leg given R = 2, ...

Shape synthesis, step 1: synthesizing a set of components. The model first resolves the high-level choices: 1-legged or 4-legged? rectangular or circular tabletop? column-like or split legs? Pruning: branches that contain assignments with extremely low probability density are discarded. (A sampling sketch follows below.)
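
A hedged sketch of step 1 as top-down sampling through the learned model (the CPD layout, the assumption that N_l is conditioned on both R and S_l, and the pruning threshold are all illustrative):

```python
import random

def sample_components(cpd_R, cpd_S, cpd_N, prune_eps=1e-4):
    """Sample a shape style R, then a component style S_l and count N_l per category.
    cpd_R: {r: prob}; cpd_S[l][r]: {s: prob}; cpd_N[l][(r, s)]: {n: prob}."""
    def draw(dist):
        # Prune branches with extremely low probability, then sample from the rest.
        kept = {v: p for v, p in dist.items() if p >= prune_eps}
        values, probs = zip(*kept.items())
        return random.choices(values, weights=probs, k=1)[0]

    r = draw(cpd_R)                       # e.g. 1-legged vs. 4-legged table style
    assignment = {"R": r}
    for l in cpd_S:                       # per component category: leg, tabletop, ...
        s = draw(cpd_S[l][r])             # e.g. column-like vs. split legs
        n = draw(cpd_N[l][(r, s)])        # illustrative conditioning of N_l on R and S_l
        assignment[l] = {"style": s, "count": n}
    return assignment
```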

Shape synthesis, step 2: optimizing component placement. Each component has "slots" that specify where it can be attached to other components.

Shape synthesis, step 2 (continued): the placement objective penalizes discrepancies of position and relative size between each pair of adjacent slots.
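
As a hedged sketch (a generic formulation of such an objective, not the paper's exact energy), with each matched pair of adjacent slots (a, b) having positions p_a, p_b and sizes s_a, s_b under the candidate placements:

E = \sum_{(a, b)} \left( \| p_a - p_b \|^2 + \lambda \, (\log s_a - \log s_b)^2 \right)

Minimizing E over the component placements pulls adjacent slots together and keeps their relative sizes consistent.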

Application: shape database amplification. Synthesize all instantiations of the model that have non-negligible probability, then identify and reject instantiations that are very similar to shapes in the input dataset or to previously generated instantiations (by comparing the feature vectors of corresponding components).
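
A minimal sketch of the rejection test implied here (the distance metric, the per-category feature layout, and the threshold are placeholders):

```python
import math

def too_similar(candidate, existing_shapes, threshold=0.1):
    """Reject `candidate` if some existing shape has every corresponding
    component-category feature vector within `threshold` of it (illustrative)."""
    def dist(u, v):
        return math.sqrt(sum((a - b) ** 2 for a, b in zip(u, v)))

    for shape in existing_shapes:
        if shape.keys() != candidate.keys():
            continue                      # different component categories: not comparable
        if all(dist(candidate[l], shape[l]) < threshold for l in candidate):
            return True
    return False
```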

Application: constrained shape synthesis. The user gives partial assignments to constrained random variables, which then assume values only from the range corresponding to the specified constraints (e.g. "4-legged" and "split legs").
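
Continuing the earlier hypothetical sampling sketch, a constraint simply restricts the values a variable may take before sampling:

```python
import random

def constrained_draw(dist, allowed, prune_eps=1e-4):
    """Sample from a CPD restricted to the user's allowed values, keeping the
    low-probability pruning from before (illustrative, not the authors' code)."""
    kept = {v: p for v, p in dist.items() if v in allowed and p >= prune_eps}
    values, probs = zip(*kept.items())
    return random.choices(values, weights=probs, k=1)[0]

# Hypothetical use: force a four-legged style with split legs.
# r = constrained_draw(cpd_R, allowed={"four_legged"})
# s_leg = constrained_draw(cpd_S["leg"][r], allowed={"split"})
```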

Results Learning took about 0.5 hours for construction vehicles, 3 hours for creatures, 8 hours for chairs, 20 hours for planes, and 70 hours for ships. For shape synthesis, enumerating all possible instantiations of a learned model takes less than an hour in all cases, and final assembly of each shape takes a few seconds.

Discussion: can it generate models like those shown (unusual component combinations)? The probability of such shapes under the learned model should be very low.

Inspiration: variability vs. plausibility. To maintain plausibility, synthesized shapes should be similar to the existing ones; to increase variability, they should be as different as possible from the existing ones. This work is good at maintaining plausibility, but the variability seems low. How can we pursue large variability while maintaining plausibility?

Possible topic: generating shape variation by variability transfer. Learn a model of the variation within a dataset rather than a model of the shapes themselves, then use that variation model to synthesize new shapes in another dataset.

Possible topic: function-preserving shape synthesis. The function of a component is not taken into account in the current model. By considering function, we could create variations that are geometrically very dissimilar while preserving the function.