1
Image Manifolds 16-721: Learning-based Methods in Vision Alexei Efros, CMU, Spring 2007 © A.A. Efros With slides by Dave Thompson
2
Images as Vectors: an m × n image can be unrolled into a single n*m-dimensional vector.
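A minimal sketch of this vectorization, assuming NumPy and a toy grayscale array (the sizes and the random image are placeholders):

import numpy as np

# Hypothetical m-by-n grayscale image; in practice this would be loaded from a file.
m, n = 32, 32
image = np.random.rand(m, n)

# Flatten the 2-D pixel grid into a single n*m-dimensional vector.
vector = image.reshape(-1)                              # shape: (m*n,)

# A dataset of k images then becomes a k-by-(m*n) matrix, one image per row.
dataset = np.stack([np.random.rand(m, n).reshape(-1) for _ in range(10)])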
3
Importance of Alignment: comparing two m × n images as n*m vectors (are they equal? =?) only makes sense if the images are aligned, so that corresponding pixels land in the same vector coordinates.
4
Text Synthesis: [Shannon, '48] proposed a way to generate English-looking text using N-grams:
- Assume a generalized Markov model
- Use a large text to compute the probability distribution of each letter given the N-1 previous letters
- Starting from a seed, repeatedly sample this Markov chain to generate new letters
- Also works for whole words: WE NEED TO EAT CAKE
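A minimal sketch of the letter-level version described above; the toy corpus, the value of N, and the seed are placeholders, and the "large text" would normally be a real corpus:

import random
from collections import defaultdict

def train_ngram(text, N=3):
    # Distribution of each letter given the N-1 previous letters, stored as a list
    # of observed successors (sampling from the list = sampling the empirical distribution).
    counts = defaultdict(list)
    for i in range(len(text) - N + 1):
        counts[text[i:i + N - 1]].append(text[i + N - 1])
    return counts

def generate(counts, seed, N=3, length=200):
    # Starting from a seed, repeatedly sample this Markov chain to generate new letters.
    out = seed
    while len(out) < length:
        successors = counts.get(out[-(N - 1):])
        if not successors:            # unseen context: stop early
            break
        out += random.choice(successors)
    return out

corpus = "we need to eat cake we need to eat more cake "   # toy stand-in for a large text
model = train_ngram(corpus, N=3)
print(generate(model, seed="we", N=3))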
5
Mark V. Shaney (Bell Labs) Results (using alt.singles corpus): “As I've commented before, really relating to someone involves standing next to impossible.” “One morning I shot an elephant in my arms and kissed him.” “I spent an interesting evening recently with a grain of salt”
6
Video Textures Arno Schödl Richard Szeliski David Salesin Irfan Essa Microsoft Research, Georgia Tech
7
Video textures
8
Our approach: How do we find good transitions?
9
Finding good transitions: Compute the L2 distance D_{i,j} between all pairs of frames (frame i vs. frame j). Similar frames make good transitions.
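A minimal sketch of computing D_{i,j}, assuming the frames are already stacked in a NumPy array (the random frames below are placeholders for real video):

import numpy as np

def pairwise_l2(frames):
    # frames: array of shape (num_frames, height, width[, channels]).
    # Flatten each frame to a vector and compute the L2 distance D[i, j]
    # between every pair of frames.
    X = frames.reshape(len(frames), -1).astype(np.float64)
    sq = np.sum(X ** 2, axis=1)
    D2 = sq[:, None] + sq[None, :] - 2.0 * X @ X.T     # squared distances
    return np.sqrt(np.maximum(D2, 0.0))                # clamp tiny negatives from round-off

# toy usage: 20 random 64x64 "frames"
D = pairwise_l2(np.random.rand(20, 64, 64))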
10
Markov chain representation Similar frames make good transitions
11
Transition costs: Transition from frame i to frame j if the successor of i is similar to j. Cost function: C_{i→j} = D_{i+1, j}
12
Transition probabilities: The probability of a transition, P_{i→j}, is inversely related to its cost: P_{i→j} ∝ exp(−C_{i→j} / σ²)   (low cost → high probability)
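A minimal sketch of turning these costs into a Markov chain and sampling it, assuming the distance matrix D from the earlier sketch; the value of σ and the random D below are placeholders:

import numpy as np

def transition_matrix(D, sigma):
    # C[i, j] = D[i+1, j]: how similar the successor of i is to j.
    # The last frame has no successor, so only transitions among frames 0 .. n-2 are kept.
    C = D[1:, :-1]
    P = np.exp(-C / sigma ** 2)               # low cost -> high probability
    P /= P.sum(axis=1, keepdims=True)         # normalize each row into a distribution
    return P

def play(P, start=0, steps=50, rng=np.random.default_rng(0)):
    # Random walk on the Markov chain of frames.
    frame, sequence = start, [start]
    for _ in range(steps):
        frame = rng.choice(len(P[frame]), p=P[frame])
        sequence.append(frame)
    return sequence

D = np.random.rand(30, 30)                    # stand-in for real frame distances
sequence = play(transition_matrix(D, sigma=0.5))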
13
Preserving dynamics
15
Cost for transition i→j: C_{i→j} = Σ_k w_k · D_{i+k+1, j+k}
16
Preserving dynamics – effect. Cost for transition i→j: C_{i→j} = Σ_k w_k · D_{i+k+1, j+k}
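A minimal sketch of this windowed cost, which compares short subsequences around the transition rather than single frames; the window size and the tent-shaped weights below are assumptions, not values from the slides:

import numpy as np

def dynamics_cost(D, weights=(1, 2, 2, 1)):
    # C[i, j] = sum_k w_k * D[i+k+1, j+k], with k running over a small window
    # centred on the transition, so that whole subsequences must match.
    N = len(weights) // 2                     # k runs from -N to N-1
    n = D.shape[0]
    C = np.full((n, n), np.inf)               # transitions too close to the ends stay forbidden
    for i in range(N, n - N):
        for j in range(N, n - N + 1):
            C[i, j] = sum(w * D[i + k + 1, j + k]
                          for w, k in zip(weights, range(-N, N)))
    return C

C = dynamics_cost(np.random.rand(30, 30))     # stand-in for real frame distances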
17
Video sprite extraction
18
Video sprite control Augmented transition cost:
19
Interactive fish
20
Advanced Perception David R. Thompson manifold learning with applications to object recognition
21
plenoptic function manifolds in vision
22
appearance variation manifolds in vision images from hormel corp.
23
deformation manifolds in vision images from www.golfswingphotos.com
24
manifold learning: Find a low-D basis for describing high-D data, X ≈ X' s.t. dim(X') << dim(X); this uncovers the intrinsic dimensionality.
25
If we knew all pairwise distances…

          Chicago  Raleigh  Boston  Seattle  S.F.   Austin  Orlando
Chicago         0
Raleigh       641        0
Boston        851      608       0
Seattle      1733     2363    2488        0
S.F.         1855     2406    2696      684     0
Austin        972     1167    1691     1764  1495        0
Orlando       994      520    1105     2565  2458     1015        0

Distances calculated with geobytes.com/CityDistanceTool
26
Multidimensional Scaling (MDS): For n data points and a distance matrix D, with D_{ij} = distance between points i and j, we can construct an m-dimensional space that preserves inter-point distances by using the top eigenvectors of D scaled by their eigenvalues.
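A minimal sketch of classical (Torgerson) MDS, applied to the city distances from the table above. Note that in this standard formulation one eigendecomposes the double-centered squared-distance matrix and scales the eigenvectors by the square roots of the eigenvalues:

import numpy as np

def classical_mds(D, m=2):
    # D: symmetric matrix of pairwise distances.
    # Double-center the squared distances to get an inner-product (Gram) matrix,
    # then embed using the top m eigenvectors scaled by sqrt(eigenvalue).
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n
    B = -0.5 * J @ (D ** 2) @ J
    eigvals, eigvecs = np.linalg.eigh(B)            # ascending order
    idx = np.argsort(eigvals)[::-1][:m]
    scale = np.sqrt(np.maximum(eigvals[idx], 0.0))
    return eigvecs[:, idx] * scale                  # n x m coordinates

cities = ["Chicago", "Raleigh", "Boston", "Seattle", "S.F.", "Austin", "Orlando"]
D = np.array([[   0,  641,  851, 1733, 1855,  972,  994],
              [ 641,    0,  608, 2363, 2406, 1167,  520],
              [ 851,  608,    0, 2488, 2696, 1691, 1105],
              [1733, 2363, 2488,    0,  684, 1764, 2565],
              [1855, 2406, 2696,  684,    0, 1495, 2458],
              [ 972, 1167, 1691, 1764, 1495,    0, 1015],
              [ 994,  520, 1105, 2565, 2458, 1015,    0]], dtype=float)
coords = classical_mds(D)      # 2-D layout that roughly recovers the map of cities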
27
MDS result in 2D
28
Actual plot of cities
29
Don’t know distances
30
Don’t know distances
31
why do manifold learning?
1. data compression
2. “curse of dimensionality”
3. de-noising
4. visualization
5. reasonable distance metrics
32
reasonable distance metrics ?
33
? linear interpolation
34
reasonable distance metrics ? manifold interpolation
35
Isomap for images: Build a data graph G.
- Vertices: images
- (u,v) is an edge iff SSD(u,v) is small
- For any two images, we approximate the distance between them with the “shortest path” on G
36
Isomap, step 1: Build a sparse graph with K-nearest neighbors; the resulting graph distance matrix D_G is sparse.
37
Isomap, step 2: Infer the remaining inter-point distances in D_G by finding shortest paths on the graph (Dijkstra's algorithm).
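A minimal sketch of these two steps, assuming SciPy is available and the data are already flattened image vectors (the random data and the choice of k are placeholders):

import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import shortest_path
from scipy.spatial.distance import cdist

def isomap_distances(X, k=5):
    # X: data matrix with one flattened image per row.
    n = len(X)
    D = cdist(X, X)                              # all pairwise distances (SSD-style)
    # 1. Keep only each point's k nearest neighbours -> sparse graph.
    G = lil_matrix((n, n))
    for i in range(n):
        for j in np.argsort(D[i])[1:k + 1]:      # skip the point itself
            G[i, j] = D[i, j]
    # 2. Approximate all other inter-point distances by shortest paths on the
    #    graph (Dijkstra). If k is too small the graph may be disconnected,
    #    leaving some entries infinite.
    return shortest_path(G.tocsr(), method="D", directed=False)

X = np.random.rand(200, 64)                      # stand-in for flattened images
Dg = isomap_distances(X, k=6)
# Feeding Dg into MDS (e.g. the classical_mds sketch above) gives the
# low-dimensional Isomap embedding.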
38
Isomap shortest-distance on a graph is easy to compute
39
Isomap results: hands
40
Isomap: pro and con
- preserves global structure
- few free parameters
- sensitive to noise, noise edges
- computationally expensive (dense matrix eigen-reduction)
41
Leakage problem
42
Locally Linear Embedding: Find a mapping that preserves local linear relationships between neighbors.
43
Locally Linear Embedding
44
LLE: Two key steps. 1. Find the weight matrix W of linear coefficients that reconstruct each point from its neighbors, minimizing E(W) = Σ_i ‖x_i − Σ_j W_{ij} x_j‖². Enforce the sum-to-one constraint Σ_j W_{ij} = 1.
45
LLE: Two key steps. 2. Find the projected vectors Y that minimize the reconstruction error Φ(Y) = Σ_i ‖y_i − Σ_j W_{ij} y_j‖²; this must be solved for the whole dataset simultaneously.
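A minimal sketch of both steps, following the standard Roweis and Saul recipe; the toy data, neighborhood size k, and regularization constant are assumptions for illustration:

import numpy as np

def lle(X, k=10, d=2, reg=1e-3):
    n = len(X)
    W = np.zeros((n, n))
    # Step 1: weights that best reconstruct each point from its k nearest
    # neighbours, constrained to sum to one.
    for i in range(n):
        nbrs = np.argsort(np.linalg.norm(X - X[i], axis=1))[1:k + 1]
        Z = X[nbrs] - X[i]                        # centre the neighbourhood
        C = Z @ Z.T
        C += reg * np.trace(C) * np.eye(k)        # regularize for numerical stability
        w = np.linalg.solve(C, np.ones(k))
        W[i, nbrs] = w / w.sum()                  # enforce sum-to-one
    # Step 2: embedding Y minimizing sum_i ||y_i - sum_j W_ij y_j||^2,
    # solved for the whole dataset at once via an eigenproblem.
    M = (np.eye(n) - W).T @ (np.eye(n) - W)
    vals, vecs = np.linalg.eigh(M)
    return vecs[:, 1:d + 1]                       # skip the constant eigenvector

# toy usage: 3-D points that actually lie on a curved 2-D sheet
rng = np.random.default_rng(0)
u, v = rng.uniform(size=(2, 400))
X = np.column_stack([u, v, np.sin(3 * u)])
Y = lle(X, k=10, d=2)        # 2-D coordinates preserving local neighbourhoods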
46
LLE result: preserves local topology (figure compares PCA and LLE embeddings).
47
LLE: pro and con
- no local minima, one free parameter
- incremental & fast
- simple linear algebra operations
- can distort global structure