L1 Sparse Reconstruction of Sharp Point Set Surfaces

L1 Sparse Reconstruction of Sharp Point Set Surfaces
Haim Avron, Andrei Sharf, Chen Greif, and Daniel Cohen-Or

Index
- 3D surface reconstruction
- Reconstruction model
- Moving least squares
- Moving away from least squares [L1 sparse reconstruction]
- Reconstruction model
- Re-weighted L1
- Results and discussion

3D surface reconstruction

Moving least squares
Input: a dense set of sample points that lie near a closed surface F, with approximate surface normals. [In practice the normals are obtained by local least-squares fitting of a plane to the sample points in a neighborhood.]
Output: a surface passing near the sample points.
How does one do that? Fit a linear point function that represents the local shape of the surface near each point s, then combine these by a weighted average to produce a 3D function I. The surface is the zero level set of I.
How good is it? Measured by how close the function I is to the signed distance function.
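As a small illustration of the preprocessing step mentioned above, here is a minimal sketch (not the paper's code; the function name `estimate_normal` and the radius-based neighborhood are my own choices) of estimating a normal by local least-squares plane fitting:

```python
import numpy as np

def estimate_normal(points, center, radius):
    """Estimate a surface normal at `center` by least-squares plane
    fitting over the samples within `radius` of it."""
    nbrs = points[np.linalg.norm(points - center, axis=1) < radius]
    centered = nbrs - nbrs.mean(axis=0)
    # The best-fit plane's normal is the right singular vector
    # associated with the smallest singular value.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return vt[-1]
```

Note the normal is recovered only up to sign; MLS pipelines typically orient normals consistently in a separate pass.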

2D -> 1D

Total variation
The l1-sparsity paradigm has been applied successfully to image denoising and deblurring using Total Variation (TV) methods [Rudin 1987; Rudin et al. 1992; Chan et al. 2001; Levin et al. 2007].
Total variation exploits the sparsity of gradients in an image: with Dij the discrete gradient operator and u the scalar image values, the TV term penalizes the l1 norm of Dij u.
The corresponding quantity for the gradient on a mesh is the normal of each simplex (triangle).
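A tiny numerical illustration (my own example, not from the talk) of why the TV term is an l1 norm: a piecewise-constant signal has a sparse discrete gradient, which is exactly the structure the l1 penalty rewards.

```python
import numpy as np

# A piecewise-constant 1D signal: its discrete gradient is sparse.
u = np.array([0.0, 0.0, 0.0, 1.0, 1.0, 1.0, 3.0, 3.0])
grad = np.diff(u)                 # discrete gradient D u
tv = np.abs(grad).sum()           # total variation = ||D u||_1
nnz = np.count_nonzero(grad)      # only the two jumps are nonzero
```

Here `tv` is 3.0 while only 2 of the 7 gradient entries are nonzero; a smooth ramp with the same endpoints would have the same TV but a dense gradient, which is why TV preserves sharp edges.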

Reconstruction model
Error term: smooth surfaces have smoothly varying normals, so the penalty function (error) is defined on the normals.
Total curvature is a quadratic penalty; instead, use a pairwise normal-difference penalty: for adjacent points pi and pj, penalize the l2 norm of the difference of their normals.
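To make the pairwise penalty concrete, here is a hypothetical sketch (the name `pairwise_penalty` and the edge-list representation of adjacency are my own) that sums normal differences over adjacent point pairs; p=2 gives the smooth quadratic penalty, while p=1 gives the sparsity-promoting variant discussed later.

```python
import numpy as np

def pairwise_penalty(normals, edges, p=2):
    """Sum of ||n_i - n_j||_2 ** p over adjacent pairs (i, j).

    normals: (N, 3) array of unit normals
    edges:   (E, 2) integer array of adjacent point indices
    """
    diffs = normals[edges[:, 0]] - normals[edges[:, 1]]
    return float(np.sum(np.linalg.norm(diffs, axis=1) ** p))
```

On a flat region all adjacent normals agree and the penalty vanishes; only pairs straddling a crease contribute.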

Reweighted l1
Consists of solving a sequence of weighted l1 minimization problems, where the weights used for the next iteration are computed from the value of the current solution. Each iteration solves a convex optimization, but the overall algorithm is not convex. [Enhancing Sparsity by Reweighted l1 Minimization, Candès et al. 2008]
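A minimal sketch of the reweighting scheme (not the paper's solver; it assumes SciPy is available and casts each weighted l1 problem as a linear program over the split variable [x; t] with |x_i| <= t_i):

```python
import numpy as np
from scipy.optimize import linprog

def reweighted_l1(A, b, iters=5, eps=0.1):
    """Reweighted l1 in the spirit of Candes et al. 2008:
    repeatedly solve  min sum_i w_i |x_i|  s.t.  A x = b,
    with weights w_i = 1 / (|x_i| + eps) from the previous solution."""
    m, n = A.shape
    w = np.ones(n)
    x = np.zeros(n)
    for _ in range(iters):
        # LP over z = [x; t]: minimize w . t  s.t.  -t <= x <= t,  A x = b
        c = np.concatenate([np.zeros(n), w])
        G = np.block([[np.eye(n), -np.eye(n)],
                      [-np.eye(n), -np.eye(n)]])
        res = linprog(c, A_ub=G, b_ub=np.zeros(2 * n),
                      A_eq=np.hstack([A, np.zeros((m, n))]), b_eq=b,
                      bounds=[(None, None)] * n + [(0, None)] * n)
        x = res.x[:n]
        w = 1.0 / (np.abs(x) + eps)
    return x
```

The small `eps` keeps the weights finite at exact zeros; large weights on near-zero entries push them all the way to zero on the next pass, which is how the scheme approaches l0-like behavior.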

Reweighted l1
What is the key difference between the l1 and l0 norms? Dependence on magnitude: l1 penalizes large entries more than small ones, while l0 only counts the nonzeros.

Reweighted l1

Geometric view of the error
- Minimize the L2 norm: sum of squared errors.
- Minimize the L1 norm: sum of absolute differences.
- Minimize the L0 norm: number of nonzero terms.
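A quick numerical illustration (my own, not from the slides) of how the three norms score the same residual vector: L2 is dominated by the one large outlier, L1 much less so, and L0 ignores magnitude entirely.

```python
import numpy as np

e = np.array([0.0, 0.5, 0.0, -2.0])   # a residual (error) vector
l2 = float(np.sum(e ** 2))            # sum of squared errors
l1 = float(np.sum(np.abs(e)))         # sum of absolute errors
l0 = int(np.count_nonzero(e))         # number of nonzero terms
```

The single entry of magnitude 2 contributes 4 of the 4.25 total under L2, only 2 of the 2.5 under L1, and just 1 of the 2 under L0; this is why L1/L0 penalties tolerate a few large deviations (sharp features) instead of smearing them out.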

2 steps

Orientation reconstruction
Orientation minimization consists of two terms:
- Global l1 minimization of orientation (normal) distances.
- Constraining the solution to be close to the initial orientation.

Orientation reconstruction (cont.)
Global l1 minimization of orientation (normal) distances. For a piecewise-smooth surface, the set of pairwise normal differences is sparse. Why? Adjacent normals agree everywhere except across the sharp feature curves, which motivates a globally weighted l1 penalty function.



Key idea !!!


Results and Discussion
Advantages:
- A global framework: until now, sharpness was treated as a purely local concept.
Criticisms:
- Slow: although off-the-shelf solvers are readily available, the convex optimization is in practice a slow process.

Discussion
There is a lot of room for improvement. Can the problem be expressed in a different form, in particular the low-rank plus sparse-error form we saw before?