Figure-centric averages Antonio Torralba & Aude Oliva (2002) Averages: Hundreds of images containing a person are averaged to reveal regularities in the intensity patterns across all the images.
More by Jason Salavon. More at: http://www.salavon.com/
“100 Special Moments” by Jason Salavon. Why blurry?
Computing Means. Two requirements: (1) alignment of objects; (2) objects must span a subspace. Useful concepts: subpopulation means, deviations from the mean.
Linear Subspace: convex combinations. Any new image X can be obtained as a weighted sum of stored “basis” images. Our old friend, change of basis! What are the new coordinates of X?
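To make the change-of-basis question concrete, here is a minimal sketch (hypothetical three-pixel “images” and a two-image basis, purely for illustration, not from the slides): the coordinates of a new image X in the stored basis can be recovered by least squares.

```python
import numpy as np

# Columns of B are flattened "basis" images; x is a new image.
# (Hypothetical three-pixel images, purely for illustration.)
B = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])            # two basis images, three pixels each
x = B @ np.array([0.3, 0.7])          # a weighted sum of the basis images

# Change of basis: recover the coordinates of x in the stored basis.
coords, *_ = np.linalg.lstsq(B, x, rcond=None)
print(coords)                          # close to [0.3, 0.7]
```

Because x lies exactly in the span of the basis images, the least-squares solve returns its coordinates exactly; for a real image, the residual measures how far it falls outside the subspace.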
The Morphable Face Model. The actual structure of a face is captured in the shape vector S = (x1, y1, x2, y2, …, xn, yn)^T, containing the (x, y) coordinates of the n vertices of a face, and the appearance (texture) vector T = (R1, G1, B1, R2, …, Gn, Bn)^T, containing the color values of the mean-warped face image.
The Morphable face model. Again, assuming that we have m such vector pairs in full correspondence, we can form new shapes S_model and new appearances T_model as linear combinations: S_model = Σi ai Si and T_model = Σi bi Ti. If the number of basis faces m is large enough to span the face subspace, then any new face can be represented as a pair of coefficient vectors (a1, a2, …, am)^T and (b1, b2, …, bm)^T!
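A toy sketch of the combination step, with made-up four-element shape and texture vectors for two basis faces (the names S, T, a, b are illustrative, not from the slides):

```python
import numpy as np

# Rows of S are example shape vectors (x1, y1, ..., xn, yn) in full
# correspondence; rows of T are the matching texture vectors.
# (Made-up four-element vectors for two basis faces.)
S = np.array([[0.0, 0.0, 1.0, 0.0],
              [0.0, 0.2, 1.2, 0.2]])
T = np.array([[0.1, 0.1, 0.1, 0.1],
              [0.9, 0.9, 0.9, 0.9]])

a = np.array([0.25, 0.75])            # shape coefficients
b = np.array([0.50, 0.50])            # texture coefficients

S_model = a @ S                       # new shape:   sum_i a_i * S_i
T_model = b @ T                       # new texture: sum_i b_i * T_i
```

The new face is then fully described by the two short coefficient vectors a and b rather than by raw pixels.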
Using 3D Geometry: Blanz & Vetter, 1999 (show SIGGRAPH video)
Joint Alignment: What’s It Good For? Erik Learned-Miller, Computer Science
Five Applications: (1) Image factorizations, for transfer learning and learning from one example. (2) Alignment for data pooling: 3D MR registration, EEG registration. (3) Artifact removal: magnetic resonance bias removal. (4) Improvements to recognition algorithms: alignment before recognition. (5) Defining anchor points for registration: finding highly repeatable regions for future registrations.
Congealing: the process of joint “alignment” of sets of arrays (samples of continuous fields). Three ingredients: a set of arrays in some class; a parameterized family of continuous transformations; a criterion of joint alignment.
Congealing Binary Digits. The three ingredients here: a set of arrays in some class (binary images); a parameterized family of continuous transformations (affine transforms); a criterion of joint alignment (entropy minimization).
Criterion of Joint Alignment: minimize the sum of pixel stack entropies by transforming each image. (A pixel stack is the set of values at a single pixel location across all images in the set.)
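The entropy criterion can be sketched as follows: a toy NumPy implementation for binary images (`stack_entropy` is a hypothetical helper for illustration, not code from the talk).

```python
import numpy as np

def stack_entropy(images):
    """Sum of binary entropies over pixel stacks.

    images: array of shape (num_images, H, W) with values in {0, 1}.
    Each pixel location defines a "stack" of values across the set;
    congealing transforms each image to minimize the total entropy.
    """
    p = images.mean(axis=0)                      # P(pixel = 1) per location
    eps = 1e-12                                  # avoid log(0)
    h = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
    return h.sum()

# Perfectly aligned stacks (all images identical) have zero entropy;
# misaligned stacks do not.
aligned    = np.stack([np.eye(4)] * 3)
misaligned = np.stack([np.eye(4), np.roll(np.eye(4), 1, axis=1), np.eye(4)])
print(stack_entropy(aligned), stack_entropy(misaligned))
```

Transforming the shifted image back into register would drive the second score down toward the first, which is exactly the signal congealing optimizes.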
An Image Factorization: Observed Image = Transform applied to “Latent Image”. (Previous work by Grenander, Frey, and Jojic.)
The Independent Pixel Assumption. The model assumes independent pixels, which makes it a poor generative model: true image probabilities don’t match model probabilities, because neighboring pixels are heavily dependent. However, this model is great for alignment and separation of causes! Why? The relative probabilities of “better aligned” and “worse aligned” are usually correct. Once components are separated, a more accurate (and more computationally expensive) model can be used to model each component.
Congealing: Before and After. Each before/after pair implicitly creates a sample of the transform T.
Character Models (CVPR 2003). Congealing factors each image into a latent image and a transform. An image kernel density estimator (or other estimator) over the congealed latent images yields the latent image probability density for zeroes, P(I_L); a transform kernel density estimator over the recovered transforms yields the transform probability density for zeroes, P(T).
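As a rough illustration of the density-estimation step, here is a toy 1-D Gaussian kernel density estimate over hypothetical samples of a single transform parameter (the real system estimates densities over full latent images and affine transforms; the `kde` helper and the sample values are invented for this sketch):

```python
import numpy as np

def kde(samples, x, bandwidth=0.3):
    """Toy 1-D Gaussian kernel density estimate (illustration only)."""
    z = (x - samples[:, None]) / bandwidth
    return np.exp(-0.5 * z ** 2).mean(axis=0) / (bandwidth * np.sqrt(2 * np.pi))

# Hypothetical 1-D transform samples (e.g. rotation angles recovered
# by congealing a set of zeroes):
angles = np.array([-0.1, 0.0, 0.05, 0.2])
grid = np.linspace(-1.0, 1.0, 201)
density = kde(angles, grid)            # an estimate of P(T) on the grid
```

Each congealed training example contributes one kernel, so the estimate concentrates mass where transforms were actually observed.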
How do we line up a new image? Use a sequence of successively “sharper” models (step 0, step 1, …, step N), taking one gradient step with respect to each model.
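The coarse-to-fine idea can be sketched in one dimension: a hypothetical toy in which greedy ±1-pixel steps against successively sharper Gaussian “models” stand in for gradient steps over affine parameters.

```python
import numpy as np

def gauss_model(center, sigma, n=50):
    """A 1-D "model" of a peak at `center`, blurred to width `sigma`."""
    i = np.arange(n)
    return np.exp(-(i - center) ** 2 / (2 * sigma ** 2))

signal = np.zeros(50)
signal[21] = 1.0                           # new image: a peak 4 pixels off

shift = 0
for sigma in (8, 4, 2, 1):                 # successively sharper models of 25
    model = gauss_model(25, sigma)
    # One greedy step per model: try shifting by -1, 0, or +1 pixel
    # and keep whichever best matches the current model.
    scores = {d: float(np.roll(signal, shift + d) @ model) for d in (-1, 0, 1)}
    shift += max(scores, key=scores.get)
print(shift)                               # the recovered translation
```

The blurry early models supply a broad basin of attraction; the sharp final model refines the answer, mirroring the step-0-to-step-N sequence on the slide.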
Digit Models from One Example