1 Manifold Learning in the Wild: A New Manifold Modeling and Learning Framework for Image Ensembles. Aswin C. Sankaranarayanan, Chinmay Hegde, Sriram Nagaraj, Richard G. Baraniuk. Rice University

2 Sensor Data Deluge

3 Concise Models. Efficient processing and compression require a concise representation. Our interest in this talk: collections of images.

5 Concise Models. Our interest in this talk: collections of images parameterized by θ ∈ Θ –translations of an object: x-offset and y-offset –rotations of a 3D object: pitch, roll, yaw –wedgelets: orientation and offset. Such a collection is an image articulation manifold.

6 Image Articulation Manifold. N-pixel images parameterized by a K-dimensional articulation parameter space Θ. The image collection is then a K-dimensional manifold in the ambient space R^N. Very concise model.
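As a toy illustration of this point-cloud view (all names and sizes below are illustrative, not from the talk), one can sample a K = 2 translation manifold of N-pixel images:

```python
import numpy as np

# Toy point cloud on an image articulation manifold: N-pixel images of a
# Gaussian bump articulated by a K = 2 parameter (x-offset, y-offset).
def bump_image(cx, cy, n=32, sigma=2.0):
    y, x = np.mgrid[0:n, 0:n]
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / (2 * sigma ** 2))

# Sample the articulation space Theta on a 5 x 5 grid of offsets
thetas = [(cx, cy) for cx in range(8, 25, 4) for cy in range(8, 25, 4)]
iam = np.array([bump_image(cx, cy).ravel() for cx, cy in thetas])
print(iam.shape)  # 25 points of a K = 2 manifold living in R^(32*32)
```

Each row is one image; the whole ensemble traces out a 2-dimensional surface in a 1024-dimensional ambient space.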

7 Smooth IAMs. N-pixel images over a low-dimensional articulation parameter space. Local isometry: image distance matches parameter-space distance. Linear tangent spaces are a close approximation locally.

10 Ex: Manifold Learning. Algorithms: LLE, ISOMAP, LE, HE, Diff. Geo, … Example: K = 1 (rotation).
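A minimal ISOMAP-style sketch (kNN graph, graph geodesics, classical MDS; every parameter below is an illustrative choice, not the talk's setup) that recovers a K = 1 rotation parameter from raw images:

```python
import numpy as np

# K = 1 ensemble: images of a Gaussian blob at angle theta on a circle
def rot_image(theta, n=24, r=8.0):
    y, x = np.mgrid[0:n, 0:n] - n // 2
    cx, cy = r * np.cos(theta), r * np.sin(theta)
    return np.exp(-((x - cx) ** 2 + (y - cy) ** 2) / 4.0)

angles = np.linspace(0, np.pi, 40, endpoint=False)   # half turn: no wraparound
X = np.array([rot_image(t).ravel() for t in angles])

D = np.sqrt(((X[:, None] - X[None]) ** 2).sum(-1))   # ambient distances
k, m = 4, len(X)
G = np.full_like(D, np.inf)
for i in range(m):                                    # kNN graph
    nbrs = np.argsort(D[i])[: k + 1]
    G[i, nbrs] = D[i, nbrs]
G = np.minimum(G, G.T)
for j in range(m):                                    # Floyd-Warshall geodesics
    G = np.minimum(G, G[:, [j]] + G[[j], :])

H = np.eye(m) - np.ones((m, m)) / m                   # classical MDS step
B = -0.5 * H @ (G ** 2) @ H
w, V = np.linalg.eigh(B)
embed = V[:, -1] * np.sqrt(w[-1])                     # top coordinate (K = 1)
print(abs(np.corrcoef(embed, angles)[0, 1]))          # the 1-D parameter is recovered
```

The recovered coordinate correlates strongly with the true articulation angle, which is exactly the behavior manifold learning promises on smooth, well-sampled ensembles.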

11 Ex: Manifold Learning K=2 rotation and scale

12 Theory/Practice Disconnect: Smoothness. Practical image manifolds are not smooth! If images have sharp edges, then the manifold is everywhere non-differentiable [Donoho and Grimes]. Can tangent approximations still be used?

14 Failure of Tangent Plane Approx. Ex: cross-fading when synthesizing / interpolating images that should lie on the manifold. (Figure: input image; geodesic vs. linear path.)

15 Theory/Practice Disconnect: Isometry. Ex: translation manifold, where all blue images are equidistant from the red image. Local isometry is satisfied only when the sampling is dense.
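The equidistance pathology is easy to reproduce (a toy sketch with delta-like one-pixel images; sizes and shifts are illustrative):

```python
import numpy as np

# For sharp (delta-like) images, every shifted copy is the same Euclidean
# distance from the reference, so image distance carries no information about
# how far the object actually moved.
n = 64
def delta(i):
    x = np.zeros(n)
    x[i] = 1.0
    return x

ref = delta(10)
dists = [np.linalg.norm(ref - delta(10 + s)) for s in range(1, 6)]
print(np.allclose(dists, np.sqrt(2)))  # all shifts equidistant from the reference
```

Any non-overlapping shift gives distance sqrt(2), so the isometry between image space and parameter space collapses unless sampling is dense enough for neighboring images to overlap.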

16 Theory/Practice Disconnect: Nuisance Articulations. Unsupervised data invariably has additional undesired articulations –illumination –background clutter, occlusions, … The image ensemble is no longer low-dimensional.

17 Image Representations. Conventional representation for an image –a vector of pixels –inadequate! (Figure: pixel image.)

18 Image Representations. Conventional representation for an image –a vector of pixels –inadequate! Remainder of the talk –two novel image representations that alleviate the theoretical/practical challenges in manifold learning on image ensembles.

19 Transport operators for image manifolds

20 The Concept of Transport Operators. Beyond the point-cloud model for image manifolds.

21 Example: Translation. 2D translation manifold: the set of all transport operators = the set of 2D translations. Beyond a point cloud model –the action of the articulation is more accurate and meaningful.
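A sketch of a translation transport operator (circular shift is a toy stand-in that assumes a periodic image; all names are illustrative):

```python
import numpy as np

# Transport-operator view of the 2D translation manifold: rather than storing
# nearby images as raw points, store the shift that maps one image to another.
def transport(img, dx, dy):
    return np.roll(np.roll(img, dy, axis=0), dx, axis=1)

img = np.zeros((8, 8))
img[2, 3] = 1.0
moved = transport(img, dx=2, dy=1)        # the articulation acting on the image
print(moved[3, 5])                        # the pixel moved by (2, 1)
# transport operators compose like the articulations they represent
print(np.array_equal(transport(moved, 1, 0), transport(img, 3, 1)))
```

The composition check is the point: the operators carry the manifold's articulation structure, which a bare point cloud does not.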

22 Optical Flow. Generalizing this idea: pixel correspondences. Idea: the optical flow (OF) between two images is a natural and accurate transport operator. (Figures from Ce Liu's optical flow page: I1 and I2, and the OF from I1 to I2.)

23 Optical Flow Transport. Consider a reference image and a K-dimensional articulation space. Collect optical flows from the reference image to all images reachable by a K-dimensional articulation. (Figure: IAM and articulations.)

25 Optical Flow Transport. Consider a reference image and a K-dimensional articulation space. Collect optical flows from the reference image to all images reachable by a K-dimensional articulation. For a large class of articulations, the collection of OFs is a smooth, K-dimensional manifold (even if the IAM is not smooth): the OFM at the reference image.

26 OFM is Smooth (Rotation). (Figure: pixel intensity I(θ) at 3 points is non-smooth in the articulation θ, while the optical flow v(θ) is nearly linear.)
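The contrast can be reproduced in one dimension (a toy sketch, not the talk's rotation experiment): translate a step edge and watch one pixel. The intensity jumps, while the ground-truth flow of a translation is linear in the articulation.

```python
import numpy as np

# Intensity I(theta) at a fixed pixel is a step (IAM not smooth), while the
# ground-truth flow v(theta) of a translation by theta is simply theta (linear).
thetas = np.linspace(0, 4, 41)
edge = lambda x, t: float(x >= t)             # step edge translated by t
x0 = 2.0
intensity = np.array([edge(x0, t) for t in thetas])
flow = thetas.copy()                          # a translation by t has flow t

print(np.max(np.abs(np.diff(intensity))))     # unit jump: non-differentiable
print(np.max(np.abs(np.diff(flow))))          # small, constant increments
```

The same pixel that is discontinuous in intensity is perfectly smooth in flow, which is exactly why the OFM is a better domain for tangent-space methods.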

27 Main Results. A local model at each articulation. Each point on the OFM defines a transport operator –each transport operator maps the reference image to one of its neighbors. For a large class of articulations, OFMs are smooth and locally isometric –traditional manifold processing techniques work on OFMs. (Figure: IAM and the OFM at the reference image.)

28 Linking It All Together. Nonlinear dimensionality reduction on the OFM. The non-differentiability does not disappear: it is embedded in the mapping from the OFM to the IAM. However, this is a known map. (Figure: IAM and OFM at the reference image.)

29 The Story So Far… (Figure: IAM with a tangent space at a point vs. the OFM at a reference image.)

30 (Figure: synthesis comparison of the input image along geodesic and linear paths, on the IAM vs. the OFM.)

31 OFM Manifold Learning. Data: 196 images of two bears moving linearly and independently. Task: find a low-dimensional embedding. (Figure: IAM vs. OFM embeddings.)

32 OFM Manifold Learning + Parameter Estimation. Data: 196 images of a cup moving on a plane. Task 1: find a low-dimensional embedding. Task 2: parameter estimation for new images (tracing an “R”). (Figure: IAM vs. OFM.)

33 Karcher Mean. The point on the manifold such that the sum of squared geodesic distances to every other point is minimized. An important concept in nonlinear data modeling, compression, and shape analysis [Srivastava et al]. (Figure: 10 images from an IAM; ground-truth KM, OFM KM, and linear KM.)
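A minimal sketch of the Karcher mean computation on the circle S^1, the simplest curved manifold (the Log/Exp maps and step rule below are the standard circle versions, an illustrative stand-in for the talk's image-domain computation):

```python
import numpy as np

# Karcher mean on S^1: minimize the sum of squared geodesic (angular)
# distances by gradient descent using the manifold's Log/Exp maps.
def log_map(mu, thetas):
    # tangent vectors at mu pointing toward each theta (signed, wrapped)
    return np.angle(np.exp(1j * (thetas - mu)))

def karcher_mean(thetas, iters=50):
    mu = thetas[0]
    for _ in range(iters):
        mu = mu + np.mean(log_map(mu, thetas))   # Exp map on S^1 is addition
    return np.mod(mu, 2 * np.pi)

angles = np.array([0.1, 0.2, 2 * np.pi - 0.1])   # cluster straddling angle 0
print(karcher_mean(angles))                      # intrinsic mean near angle 0
```

A naive linear average of these angles lands near 2.2 radians, far from the cluster; the intrinsic (Karcher) mean correctly stays near 0, which is the slide's contrast between the linear KM and the geodesic KM.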

34 Sparse keypoint-based image representation

35 Image Representations. Conventional representation for an image –a vector of pixels –inadequate! (Figure: pixel image.)

37 Image Representations. Replace the vector of pixels with an abstract bag of features –ex: SIFT (Scale-Invariant Feature Transform) selects keypoint locations in an image and computes a keypoint descriptor for each keypoint –feature descriptors are local; it is very easy to make them robust to nuisance imaging parameters.

38 Loss of Geometrical Info. Bag-of-features representations hide potentially useful image geometry. Goal: make salient image geometrical info more explicit for exploitation. (Figure: image space vs. keypoint space.)

39 Key Idea. The keypoint space can be endowed with a rich low-dimensional structure in many situations. Mechanism: define kernels between keypoint locations and between keypoint descriptors.

40 Keypoint Kernel. The keypoint space can be endowed with a rich low-dimensional structure in many situations. Mechanism: define kernels between keypoint locations and between keypoint descriptors. The joint keypoint kernel between two images combines the location and descriptor kernels.

41 Keypoint Geometry. Theorem: under the metric induced by the kernel, certain ensembles of articulating images form smooth, isometric manifolds. In contrast, the conventional approach to image fusion via image articulation manifolds (IAMs) is fraught with non-differentiability (due to sharp image edges) –not smooth –not isometric.

43 Application: Manifold Learning. 2D translation. (Figure: IAM vs. KAM embeddings.)

44 Manifold Learning in the Wild. Rice University's Duncan Hall lobby –158 images –360° panorama using a handheld camera –varying brightness, clutter.

45 Manifold Learning in the Wild. Duncan Hall lobby: ground truth obtained using state-of-the-art structure-from-motion software. (Figure: ground truth vs. IAM vs. KAM.)

46 Internet-Scale Imagery. Notre-Dame cathedral –738 images –collected from Flickr –large variations in illumination (night/day/saturation), clutter (people, decorations), and camera parameters (focal length, field of view, …) –non-uniform sampling of the space.

47 Organization: k-nearest neighbors.

48 Organization: “geodesics”. Examples: 3D rotation, “walk closer”, “zoom out”.

49 Summary. Need for novel image representations –transport operators: enable differentiability and meaningful transport –sparse features: robustness to outliers, nuisance articulations, etc.; learning in the wild on unsupervised imagery. The true power of manifold signal processing lies in fast algorithms that mainly use neighbor relationships –what are compelling applications where such methods can achieve state-of-the-art performance?

51 Summary. IAMs are a useful concise model for many image processing problems involving image collections and multiple sensors/viewpoints. But practical IAMs are non-differentiable –IAM-based algorithms have not lived up to their promise. Optical flow manifolds (OFMs) –smooth even when the IAM is not –OFM ~ nonlinear tangent space –support accurate image synthesis, learning, charting, … Barely discussed here: OF enables the safe extension of differential-geometry concepts –Log/Exp maps, Karcher mean, parallel transport, …

52 Summary. Bag-of-features representations are pervasive in image information fusion applications (ex: SIFT). Progress to date: –bags of features can contain rich geometrical structure –the keypoint distance is very efficient to compute (contrast with the combinatorial complexity of feature correspondence algorithms) –the keypoint distance significantly outperforms the standard Euclidean image distance in learning and fusion applications –punch line: KAM preferred over IAM. Current/future work: –more exhaustive experiments with occlusion and clutter –studying the geometrical structure of feature representations of other kinds of non-image data (ex: documents).

53 Open Questions. Our treatment is specific to image manifolds under brightness constancy. What are the natural transport operators for other data manifolds? dsp.rice.edu

54 Related Work. Analytic transport operators –transport operators with group structure [Xiao and Rao 07] [Culpepper and Olshausen 09] [Miller and Younes 01] [Tuzel et al 08] –non-linear analytics [Dollar et al 06] –spatio-temporal manifolds [Li and Chellappa 10] –shape manifolds [Klassen et al 04]. The analytic approach is limited to a small class of standard image transformations (ex: affine transformations, Lie groups). In contrast, the OFM approach works reliably with real-world image samples (point clouds) and a broader class of transformations.

55 Limitations. Brightness constancy violations –optical flow is no longer meaningful. Occlusion –pixel flow is undefined in theory and arbitrarily estimated in practice –heuristics to deal with it. Changing backgrounds, etc. –the transport-operator assumption is too strict –sparse correspondences?

57 Occlusion. Detect occlusion using forward-backward flow reasoning; remove occluded pixels from the computations. This is a heuristic: formal occlusion handling is hard. (Figure: occluded region.)
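A sketch of the forward-backward check (integer flows on a tiny grid; the function name, tolerance, and fields are illustrative, not the talk's implementation):

```python
import numpy as np

# Flag a pixel as occluded when the backward flow fails to undo the forward
# flow, or when the pixel flows out of the frame (no evidence either way).
def fb_occlusion_mask(fwd, bwd, tol=0.5):
    h, w, _ = fwd.shape
    mask = np.zeros((h, w), dtype=bool)
    for y in range(h):
        for x in range(w):
            dx, dy = fwd[y, x]
            x2, y2 = x + dx, y + dy
            if not (0 <= x2 < w and 0 <= y2 < h):
                mask[y, x] = True          # flowed out of frame
                continue
            bx, by = bwd[y2, x2]
            # consistent if the backward flow returns (within tol) to the start
            mask[y, x] = np.hypot(dx + bx, dy + by) > tol
    return mask

# Consistent constant translation by (1, 0): only the last column is flagged,
# because its pixels flow out of the frame.
fwd = np.zeros((4, 4, 2), dtype=int); fwd[..., 0] = 1
bwd = np.zeros((4, 4, 2), dtype=int); bwd[..., 0] = -1
mask = fb_occlusion_mask(fwd, bwd)
print(mask[:, :3].any(), mask[:, 3].all())
```

Pixels whose forward and backward flows cancel are kept; everything else is dropped from the OFM computation, matching the slide's heuristic.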

58 Open Questions. Theorem: random measurements stably embed a K-dimensional manifold with high probability [Baraniuk and Wakin, FOCM ’08]. Q: Is there an analogous result for OFMs?

59 Tools for Manifold Processing. Smooth differentiable manifolds (algebraic manifolds): geodesics, exponential maps, log-maps, Riemannian metrics, Karcher means, … Data manifolds (point cloud model): LLE, kNN graphs.

61 Concise Models. Efficient processing / compression requires a concise representation. Sparsity of an individual image: few large wavelet coefficients. (Figure: pixels vs. wavelet coefficients; blue = 0.)

62 History of Optical Flow. Dark ages (<1981) –special cases solved –linearized brightness constancy (LBC) gives an under-determined set of linear equations. Horn and Schunck (1981) –regularization term: a smoothness prior on the flow. Brox et al (2005) –shows that linearization of brightness constancy (BC) is a bad assumption –develops an optimization framework to handle BC directly. Brox et al (2010), Black et al (2010), Liu et al (2010) –practical systems with reliable code.

63 Manifold Learning. ISOMAP embedding error for the OFM and the IAM (2D rotations). (Figure: reference image.)

64 Embedding of the OFM (2D rotations). (Figure: reference image.)

65 OFM Implementation Details. (Figure: reference image.)

66 Pairwise distances and embedding

67 Flow Embedding

68 OFM Synthesis

69 Manifold Charting. Goal: build a generative model for an entire IAM/OFM based on a small number of base images. eLCheapo™ algorithm: –choose a reference image randomly –find all images that can be generated from this image by OF –compute the Karcher (geodesic) mean of these images –compute the OF from the Karcher-mean image to the other images –repeat on the remaining images until no images remain. Exact representation when there are no occlusions.
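A greedy sketch of this charting loop on a toy 1-D manifold. Here "reachable by OF" is approximated as "within radius r in the articulation parameter" and the Karcher mean by the member closest to the average parameter; both are illustrative stand-ins, as are all names and constants.

```python
import numpy as np

# Greedy charting: pick a random reference, grab everything reachable from it,
# summarize the chart by a (stand-in) Karcher mean, and repeat on the rest.
def chart(params, r=0.3, seed=1):
    rng = np.random.default_rng(seed)
    remaining = list(range(len(params)))
    charts = []
    while remaining:
        ref = rng.choice(remaining)                        # random reference
        members = [i for i in remaining
                   if abs(params[i] - params[ref]) <= r]   # "reachable" images
        center = np.mean([params[i] for i in members])
        base = min(members, key=lambda i: abs(params[i] - center))
        charts.append((base, members))                     # one chart
        remaining = [i for i in remaining if i not in members]
    return charts

params = np.linspace(0, 1, 50)
charts = chart(params)
covered = sorted(i for _, m in charts for i in m)
print(covered == list(range(50)))   # every image lands in exactly one chart
```

Because each pass removes the images it covered, the loop terminates with a disjoint cover of the ensemble by a small number of base images, which is the slide's goal.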

70 Manifold Charting. Goal: build a generative model for an entire IAM/OFM based on a small number of base images. Ex: a cube rotating about an axis; all cube images can be represented using 4 reference images + OFMs. Many applications –selection of target templates for classification –“next-view” selection for adaptive sensing applications.

72 SIFT Features. Features (including SIFT) are ubiquitous in fusion and processing applications (15k+ citations for the two SIFT papers): building 3D models, part-based object recognition, organizing internet-scale databases, image stitching. (Figures courtesy Rob Fergus (NYU), the Phototourism website, Antonio Torralba (MIT), and Wei Lu.)

73 Many Possible Kernels. Euclidean kernel, Gaussian kernel, polynomial kernel, pyramid match kernel [Grauman et al. ’07], and many others.

74 Keypoint Kernel. The joint keypoint kernel between two images combines a kernel on keypoint locations with a kernel on keypoint descriptors. Using the Euclidean/Gaussian (E/G) combination yields a closed-form expression.
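One plausible instantiation of such a joint kernel (the Gaussian bandwidth and the linear descriptor kernel are illustrative choices, not the talk's exact E/G definition): sum, over all keypoint pairs, a location kernel times a descriptor kernel. No correspondence search is needed.

```python
import numpy as np

# Joint keypoint kernel sketch: Gaussian kernel on keypoint positions times a
# linear kernel on descriptors, summed over all pairs of keypoints.
def keypoint_kernel(locs1, desc1, locs2, desc2, sigma=5.0):
    d2 = ((locs1[:, None, :] - locs2[None, :, :]) ** 2).sum(-1)
    k_loc = np.exp(-d2 / (2 * sigma ** 2))   # Gaussian kernel on positions
    k_desc = desc1 @ desc2.T                 # linear kernel on descriptors
    return (k_loc * k_desc).sum()            # all pairs: no matching step

rng = np.random.default_rng(0)
locs = rng.uniform(0, 100, (10, 2))
desc = rng.normal(size=(10, 8))
same = keypoint_kernel(locs, desc, locs, desc)
shifted = keypoint_kernel(locs, desc, locs + 50.0, desc)
print(same > shifted)   # the kernel is sensitive to keypoint geometry
```

Translating every keypoint far from its original location collapses the kernel value even though the descriptors are identical, so the kernel sees geometry that a bare bag of descriptors would hide.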

75 From Kernel to Metric. Theorem: the E/G keypoint kernel is a Mercer kernel –this enables algorithms such as SVM. Theorem: the E/G keypoint kernel induces a metric on the space of images –an alternative to the conventional L2 distance between images –the keypoint metric is robust to nuisance imaging parameters, occlusion, clutter, etc.
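The kernel-to-metric construction is generic: any Mercer kernel k induces the feature-space distance d(x, y) = sqrt(k(x, x) + k(y, y) - 2 k(x, y)). A sketch with a Gaussian kernel on small vectors standing in for images (toy setup; constants illustrative):

```python
import numpy as np

# Metric induced by a Mercer kernel: the Euclidean distance between feature
# maps, d(x, y)^2 = k(x, x) + k(y, y) - 2 k(x, y).
def gauss_k(x, y, sigma=1.0):
    return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

def kernel_metric(x, y, k=gauss_k):
    # clip at 0 to guard against tiny negative values from round-off
    return np.sqrt(max(k(x, x) + k(y, y) - 2 * k(x, y), 0.0))

a, b, c = np.array([0.0]), np.array([0.5]), np.array([2.0])
print(kernel_metric(a, a))   # identity of indiscernibles: distance 0
print(kernel_metric(a, c) <= kernel_metric(a, b) + kernel_metric(b, c))
```

Because it is a Euclidean distance in the kernel's feature space, the triangle inequality comes for free, which is what makes the induced keypoint distance a genuine metric.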

76 Keypoint Geometry. Theorem: the E/G keypoint kernel induces a metric on the space of images. NOTE: the metric requires no permutation-based matching of keypoints (which has combinatorial complexity in general).

77 Keypoint Geometry. Theorem: under the metric induced by the kernel, certain ensembles of articulating images form smooth, isometric manifolds. The keypoint representation is compact, efficient, and robust to illumination variations, non-stationary backgrounds, clutter, and occlusions.

78 Manifold Learning in the Wild. Viewing angle –179 images. (Figure: IAM vs. KAM.)

79 Manifold Learning in the Wild. Rice University's Brochstein Pavilion –400 outdoor images of a building –occlusions, movement in the foreground, varying background.

80 Manifold Learning in the Wild. Brochstein Pavilion –400 outdoor images of a building –occlusions, movement in the foreground, varying background. (Figure: IAM vs. KAM.)

