
1
Loss-based Visual Learning with Weak Supervision
M. Pawan Kumar
Joint work with Pierre-Yves Baudin, Danny Goodman, Puneet Kumar, Nikos Paragios, Noura Azzabou, Pierre Carlier

2
SPLENDID: Self-Paced Learning for Exploiting Noisy, Diverse or Incomplete Data
Nikos Paragios, Equipe Galen, INRIA Saclay
Daphne Koller, DAGS, Stanford
Machine Learning: weak annotations, noisy annotations
Applications: computer vision, medical imaging
2 visits from INRIA to Stanford; 1 visit from Stanford to INRIA; 3 visits planned
2012: ICML; 2013: MICCAI

3
Medical Image Segmentation
MRI acquisitions of the thigh

4
Medical Image Segmentation
MRI acquisitions of the thigh
Segments correspond to muscle groups

5
Random Walks Segmentation
- Probabilistic segmentation algorithm
- Computationally efficient
- Interactive segmentation (L. Grady, 2006)
- Automated shape-prior-driven segmentation (L. Grady, 2005; Baudin et al., 2012)

6
Random Walks Segmentation
y(i,s): probability that voxel i belongs to segment s
x: medical acquisition

min_y E(x,y) = yᵀ L(x) y + w_shape ||y - y0||²

L(x): positive semi-definite Laplacian matrix
y0: shape prior on the segmentation
w_shape: hand-tuned parameter of the RW algorithm
The problem is convex.
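Since the energy is convex and quadratic, its minimiser is available in closed form: setting the gradient to zero gives the linear system (L(x) + w_shape I) y = w_shape y0. A minimal NumPy sketch for a single segment; the 3-voxel chain Laplacian and prior values here are made up for illustration:

```python
import numpy as np

def rw_segment(L, y0, w_shape):
    """Minimise yᵀ L y + w_shape ||y - y0||² in closed form.

    Setting the gradient to zero: 2 L y + 2 w_shape (y - y0) = 0,
    i.e. (L + w_shape I) y = w_shape y0.
    """
    n = L.shape[0]
    return np.linalg.solve(L + w_shape * np.eye(n), w_shape * y0)

# Toy Laplacian of a 3-voxel chain graph (positive semi-definite).
L = np.array([[ 1.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  1.0]])
y0 = np.array([1.0, 0.5, 0.0])  # made-up shape-prior probabilities
y = rw_segment(L, y0, w_shape=0.1)
```

The Laplacian term smooths the prior probabilities across neighbouring voxels, which is why the solution interpolates between y0 and a constant vector.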

7
Random Walks Segmentation
Several Laplacians: L(x) = Σ_α w_α L_α(x)
Several shape and appearance priors: Σ_β w_β ||y - y_β||²
Hand-tuning a large number of parameters is onerous.

8
Parameter Estimation
Learn the best parameters from training data:

E(x,y) = Σ_α w_α yᵀ L_α(x) y + Σ_β w_β ||y - y_β||²

9
Parameter Estimation
Learn the best parameters from training data.
The energy can be written as wᵀ Ψ(x,y), where w is the set of all parameters and Ψ(x,y) is the joint feature vector of the input and output.
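To make the linearity in w concrete: stacking one quadratic term per Laplacian and one per prior yields Ψ(x,y), and the total energy is the inner product wᵀ Ψ(x,y). A sketch with made-up toy matrices (the function names are illustrative, not from the talk):

```python
import numpy as np

def joint_feature_vector(laplacians, priors, y):
    """Ψ(x, y): one entry per Laplacian (yᵀ L_α y) and per prior (||y - y_β||²)."""
    quad = [y @ L @ y for L in laplacians]
    prior = [np.sum((y - y_b) ** 2) for y_b in priors]
    return np.array(quad + prior)

def energy(w, laplacians, priors, y):
    """The RW energy is linear in the parameters: E = wᵀ Ψ(x, y)."""
    return w @ joint_feature_vector(laplacians, priors, y)

# Toy 2-voxel example: two Laplacians, one shape prior.
L1 = np.array([[1.0, -1.0], [-1.0, 1.0]])
laplacians = [L1, 2.0 * L1]
priors = [np.array([1.0, 0.0])]
y = np.array([0.8, 0.2])
w = np.array([0.5, 0.2, 0.3])
E = energy(w, laplacians, priors, y)
```

Because E is linear in w, max-margin learning of w becomes possible, which is what the next slides set up.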

10
Outline
- Parameter Estimation
  - Supervised Learning
  - Hard vs. Soft Segmentation
  - Mathematical Formulation
- Optimization
- Experiments
- Related and Future Work in SPLENDID

11
Supervised Learning
Dataset of segmented MRIs.
For sample x_k and voxel i:
z_k(i,s) = 1 if s is the ground-truth segment, 0 otherwise
But what about probabilistic (soft) segmentation?

12
Supervised Learning
Structured-output Support Vector Machine (Taskar et al., 2003; Tsochantaridis et al., 2004):

min_w Σ_k ξ_k + λ||w||²
s.t. wᵀ Ψ(x_k, ŷ) - wᵀ Ψ(x_k, z_k) ≥ Δ(ŷ, z_k) - ξ_k for all ŷ

wᵀ Ψ(x_k, z_k): energy of the ground truth
wᵀ Ψ(x_k, ŷ): energy of a segmentation ŷ
Δ(ŷ, z_k) = fraction of incorrectly labeled voxels
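The loss Δ(ŷ, z_k) in the constraints, the fraction of incorrectly labeled voxels, can be computed by binarising the soft prediction (arg-max over segments) and comparing with the one-hot ground truth. A small sketch with made-up arrays:

```python
import numpy as np

def delta_loss(y_hat, z):
    """Δ(ŷ, z): fraction of voxels whose arg-max segment disagrees with z.

    y_hat: (num_voxels, num_segments) soft prediction.
    z:     (num_voxels, num_segments) one-hot ground truth.
    """
    return float(np.mean(y_hat.argmax(axis=1) != z.argmax(axis=1)))

y_hat = np.array([[0.9, 0.1],
                  [0.2, 0.8],
                  [0.6, 0.4]])
z = np.array([[1, 0],
              [1, 0],
              [1, 0]])
loss = delta_loss(y_hat, z)  # one of three voxels is mislabeled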

13
Supervised Learning
- Convex, with several efficient algorithms
- But no parameter setting yields the 'hard' ground-truth segmentation
- We only need a correct 'soft' probabilistic segmentation

14
Outline
- Parameter Estimation
  - Supervised Learning
  - Hard vs. Soft Segmentation
  - Mathematical Formulation
- Optimization
- Experiments
- Related and Future Work in SPLENDID

15
Hard vs. Soft Segmentation
Hard segmentation z_k.
We don't require 0-1 probabilities.

16
Hard vs. Soft Segmentation
Soft segmentation y_k compatible with z_k: binarizing y_k gives z_k.

17
Hard vs. Soft Segmentation
y_k ∈ C(z_k): the set of soft segmentations y_k compatible with z_k.
Which y_k should we use? The y_k provided by the best parameter, which is unknown.
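Membership in the compatibility set C(z_k) follows directly from the definition two slides back: a soft segmentation is compatible when binarising it reproduces the hard ground truth. A sketch (array values are made up):

```python
import numpy as np

def is_compatible(y, z):
    """y ∈ C(z): binarising the soft segmentation y recovers the hard z."""
    return bool(np.all(y.argmax(axis=1) == z.argmax(axis=1)))

z = np.array([[1, 0], [0, 1]])               # hard ground truth
y_good = np.array([[0.6, 0.4], [0.3, 0.7]])  # binarises to z
y_bad = np.array([[0.4, 0.6], [0.3, 0.7]])   # voxel 0 flips segment
```

C(z) is thus a product of simplex pieces, one per voxel, which is what makes the imputation step in the later CCCP algorithm tractable.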

18
Outline
- Parameter Estimation
  - Supervised Learning
  - Hard vs. Soft Segmentation
  - Mathematical Formulation
- Optimization
- Experiments
- Related and Future Work in SPLENDID

19
Learning with Hard Segmentation

min_w Σ_k ξ_k + λ||w||²
s.t. wᵀ Ψ(x_k, ŷ) - wᵀ Ψ(x_k, z_k) ≥ Δ(ŷ, z_k) - ξ_k

20
Learning with Soft Segmentation

min_w Σ_k ξ_k + λ||w||²
s.t. wᵀ Ψ(x_k, ŷ) - wᵀ Ψ(x_k, y_k) ≥ Δ(ŷ, z_k) - ξ_k

21
Learning with Soft Segmentation
Latent Support Vector Machine (Smola et al., 2005; Felzenszwalb et al., 2008; Yu et al., 2009):

min_w Σ_k ξ_k + λ||w||²
s.t. wᵀ Ψ(x_k, ŷ) - min_{y_k ∈ C(z_k)} wᵀ Ψ(x_k, y_k) ≥ Δ(ŷ, z_k) - ξ_k

22
Outline
- Parameter Estimation
- Optimization
- Experiments
- Related and Future Work in SPLENDID

23
Latent SVM
Difference-of-convex problem:

min_w Σ_k ξ_k + λ||w||²
s.t. wᵀ Ψ(x_k, ŷ) - min_{y_k ∈ C(z_k)} wᵀ Ψ(x_k, y_k) ≥ Δ(ŷ, z_k) - ξ_k

Solved with the Concave-Convex Procedure (CCCP).

24
CCCP
Repeat until convergence:
1. Estimate the soft segmentation: y_k* = argmin_{y_k ∈ C(z_k)} wᵀ Ψ(x_k, y_k). Efficient optimization using dual decomposition.
2. Update the parameters: min_w Σ_k ξ_k + λ||w||² s.t. wᵀ Ψ(x_k, ŷ) - wᵀ Ψ(x_k, y_k*) ≥ Δ(ŷ, z_k) - ξ_k. Convex optimization.
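The alternation above can be sketched on a toy problem where each candidate segmentation is reduced to its feature vector Ψ and loss Δ. All data here are made up, and two simplifications replace the slide's solvers: the dual-decomposition imputation step becomes brute-force enumeration over a tiny candidate pool, and the convex update becomes subgradient descent on the structured hinge loss.

```python
import numpy as np

def cccp(features, losses, compatible, lam=0.1, outer_iters=10,
         inner_iters=50, lr=0.05):
    """Toy CCCP for a latent SVM over a finite candidate pool.

    features:   (num_candidates, dim) array, row i = Ψ(x, y_i)
    losses:     (num_candidates,) array, entry i = Δ(y_i, z)
    compatible: boolean mask marking the candidates in C(z)
    """
    w = np.zeros(features.shape[1])
    comp_idx = np.where(compatible)[0]
    for _ in range(outer_iters):
        # Step 1: impute y* = argmin over compatible candidates of wᵀΨ.
        star = comp_idx[np.argmin(features[comp_idx] @ w)]
        # Step 2: subgradient descent on the convex upper bound
        #   λ||w||² + max_ŷ [Δ(ŷ, z) + wᵀΨ(y*) - wᵀΨ(ŷ)].
        for _ in range(inner_iters):
            j = np.argmax(losses + (features[star] - features) @ w)
            w -= lr * (2.0 * lam * w + features[star] - features[j])
    return w

# Two candidates: index 0 is compatible (Δ = 0), index 1 is not (Δ = 0.5).
features = np.array([[1.0, 0.0],
                     [0.0, 1.0]])
losses = np.array([0.0, 0.5])
compatible = np.array([True, False])
w = cccp(features, losses, compatible)
```

After training, the learned w assigns the incompatible candidate a higher energy than the compatible one by roughly the required margin Δ = 0.5, which is exactly what the latent-SVM constraints demand.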

25
Outline
- Parameter Estimation
- Optimization
- Experiments
- Related and Future Work in SPLENDID

26
Dataset
- 30 MRI volumes of the thigh
- Dimensions: 224 x 224 x …
- Muscle groups + background
- 80% for training, 20% for testing

27
Parameters
- 4 Laplacians
- 2 shape priors
- 1 appearance prior
(Baudin et al., 2012; Grady, 2005)

28
Baselines
- Hand-tuned parameters
- Structured-output SVM with the hard segmentation
- Structured-output SVM with a soft segmentation based on the signed distance transform

29
Results
Small but statistically significant improvement over the baselines.

30
Outline
- Parameter Estimation
- Optimization
- Experiments
- Related and Future Work in SPLENDID

31
Loss-based Learning
x: input; a: annotation

32
Loss-based Learning
x: input; a: annotation; h: hidden information
Examples: a = "jumping"; h = "soft segmentation"

33
Loss-based Learning
x: input; a: annotation; h: hidden information
Examples: a = "jumping"; h = "soft segmentation"
Annotation mismatch: min Σ_k Δ(correct a_k, predicted a_k)

34
Loss-based Learning
Annotation mismatch: min Σ_k Δ(correct a_k, predicted a_k)
Small improvement on a small medical dataset.

35
Loss-based Learning
Annotation mismatch: min Σ_k Δ(correct a_k, predicted a_k)
Large improvement on a large vision dataset.

36
Loss-based Learning
Output mismatch: min Σ_k Δ(correct {a_k, h_k}, predicted {a_k, h_k}), with h modeled using a distribution (Kumar, Packer and Koller, ICML 2012)
- Inexpensive annotation
- No experts required
- Richer models can be learnt

37
Questions?
