
1
Figure-centric averages (Antonio Torralba & Aude Oliva, 2002). Averages: hundreds of images containing a person are averaged to reveal regularities in the intensity patterns across all the images.

2
More by Jason Salavon More at:

3
“100 Special Moments” by Jason Salavon Why blurry?

4
Computing Means
Two requirements:
- Alignment of objects
- Objects must span a subspace
Useful concepts:
- Subpopulation means
- Deviations from the mean

5
Images as Vectors: an n×m image, read out pixel by pixel, becomes a single vector of length n*m.

6
Vector Mean: Importance of Alignment. With each n×m image unrolled into a length-n*m vector, the mean image of two images X1 and X2 is simply ½X1 + ½X2; the result is a meaningful image only if the inputs are aligned.
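The unroll-and-average idea above can be sketched in a few lines of NumPy; the random arrays here are stand-ins for aligned face images:

```python
import numpy as np

# Minimal sketch: each 48x32 image is unrolled into a length-1536 vector,
# and the mean image is the average of those vectors, reshaped back.
rng = np.random.default_rng(0)
images = rng.random((100, 48, 32))          # 100 aligned stand-in images

vectors = images.reshape(100, -1)           # each row is an n*m vector
mean_vector = vectors.mean(axis=0)          # the vector mean
mean_image = mean_vector.reshape(48, 32)    # back to image form

print(mean_image.shape)                     # (48, 32)
```

Averaging the unrolled vectors and averaging the images pixelwise give exactly the same result; the vector view matters because it sets up the subspace machinery on the later slides.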

7
How to align faces?

8
Shape Vector: each face is annotated with the same 43 feature points, stacked into a shape vector. This provides alignment!

9
Average Face 1. Warp to mean shape 2. Average pixels

10
Objects must span a subspace: e.g., combining the basis points (1,0) and (0,1) with weights ½ and ½ gives (.5,.5).
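The slide's tiny worked example, as code:

```python
import numpy as np

# Convex combination of two basis points: ½(1,0) + ½(0,1) = (.5,.5),
# a new point that still lies in the subspace spanned by the basis.
p = 0.5 * np.array([1.0, 0.0]) + 0.5 * np.array([0.0, 1.0])
print(p)  # [0.5 0.5]
```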

11
Example: a set of objects that does not span a subspace; their mean is not a valid member of the set.

12
Subpopulation means. Examples:
- Happy faces, young faces, Asian faces, etc.
- Sunny days, rainy days, etc.
- Average male, average female

13
Deviations from the mean: ΔX = X − X̄, the image X minus the mean image X̄.

14
Deviations from the mean: adding the deviation back reconstructs the image, X = X̄ + ΔX.
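The two slides above describe an exact factorization of any image into a mean plus a deviation; a minimal NumPy sketch (random stand-in arrays):

```python
import numpy as np

# Mean/deviation factorization: X = X_mean + dX, with dX = X - X_mean,
# so adding the deviation back reconstructs the original image exactly.
rng = np.random.default_rng(1)
faces = rng.random((50, 64, 64))        # stand-in for aligned face images

mean_face = faces.mean(axis=0)
deviation = faces[0] - mean_face        # dX = X - X_mean
reconstructed = mean_face + deviation   # X = X_mean + dX

print(np.allclose(reconstructed, faces[0]))  # True
```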

15
Manipulating Facial Appearance through Shape and Color. Duncan A. Rowland and David I. Perrett, St Andrews University. IEEE CG&A, September 1995.

16
Face Modeling:
- Compute average faces (color and shape)
- Compute deviations between male and female (vector and color differences)

17
Changing gender: deform the shape and/or color of an input face in the direction of “more female.” (Panels: original, shape only, color only, both.)
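A hedged sketch of the Rowland & Perrett idea: compute a male-to-female difference vector from the two subpopulation means, then push an input face along it. The arrays are random stand-ins, and `alpha` is an assumed strength parameter, not a name from the paper:

```python
import numpy as np

# Stand-in data: two subpopulations of aligned 64x64 face images.
rng = np.random.default_rng(2)
male_faces = rng.random((30, 64, 64))
female_faces = rng.random((30, 64, 64))
input_face = male_faces[0]

# The "gender axis" is the difference between subpopulation means.
gender_axis = female_faces.mean(axis=0) - male_faces.mean(axis=0)

alpha = 0.5                                 # 0 = unchanged, 1 = full shift
feminized = input_face + alpha * gender_axis
```

In the actual paper the shape vectors and the color values are shifted separately, which is what produces the "shape only / color only / both" panels on the slide.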

18
Enhancing gender. (Panels: more of the same gender, original, androgynous, more of the opposite gender.)

19
Changing age: the face becomes “rounder,” “more textured,” and “grayer.” (Panels: original, shape only, color only, both.)

20
Back to the Subspace

21
Linear Subspace: convex combinations. Any new image X can be obtained as a weighted sum of stored “basis” images. Our old friend, change of basis! What are the new coordinates of X?
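The question on the slide, finding the new coordinates of X, is a linear solve: stack the basis images as columns of a matrix B and solve B·w = x for the weight vector w. A minimal sketch with random stand-in data:

```python
import numpy as np

# 10 "basis" images of 64*64 = 4096 pixels, stacked as columns.
rng = np.random.default_rng(3)
basis = rng.random((4096, 10))

# Build a test image that lies exactly in the subspace, then recover
# its coordinates with a least-squares solve.
w_true = rng.random(10)
x = basis @ w_true

w, *_ = np.linalg.lstsq(basis, x, rcond=None)
print(np.allclose(w, w_true))  # True
```

For an image that is not exactly in the subspace, the same solve gives the coordinates of its closest point in the subspace.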

22
The Morphable Face Model. The structure of a face is captured in the shape vector S = (x1, y1, x2, y2, …, xn, yn)^T, containing the (x, y) coordinates of the n vertices of a face, and in the appearance (texture) vector T = (R1, G1, B1, R2, …, Gn, Bn)^T, containing the color values of the mean-warped face image.

23
The Morphable Face Model. Again, assuming that we have m such vector pairs in full correspondence, we can form new shapes S_model and new appearances T_model as weighted sums: S_model = Σi ai Si and T_model = Σi bi Ti. If the number of basis faces m is large enough to span the face subspace, then any new face can be represented as a pair of coefficient vectors (a1, a2, …, am)^T and (b1, b2, …, bm)^T!
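A minimal sketch of forming a new face as weighted sums of the m stored shape/texture pairs; the data is random, and the Dirichlet draw is just one convenient way to get convex weights:

```python
import numpy as np

rng = np.random.default_rng(4)
m, n = 20, 68                       # 20 example faces, 68 vertices each
S = rng.random((m, 2 * n))          # rows: shape vectors (x1, y1, ..., xn, yn)
T = rng.random((m, 3 * n))          # rows: texture vectors (R1, G1, B1, ...)

a = rng.dirichlet(np.ones(m))       # convex weights for shape, sum to 1
b = rng.dirichlet(np.ones(m))       # convex weights for texture

S_model = a @ S                     # new shape:   sum_i a_i * S_i
T_model = b @ T                     # new texture: sum_i b_i * T_i
```

The pair of coefficient vectors (a, b) is then the new face's representation in the morphable model.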

24
Using 3D Geometry: Blanz & Vetter, 1999. (Show SIGGRAPH video.)

25
Computer Science Erik Learned-Miller Joint Alignment: What’s It Good For?

26
Congealing (CVPR 2000, PAMI 2006)

27
Five Applications:
1. Image factorizations: for transfer learning, learning from one example
2. Alignment for data pooling: 3D MR registration, EEG registration
3. Artifact removal: magnetic resonance bias removal
4. Improvements to recognition algorithms: alignment before recognition
5. Defining anchor points for registration: find highly repeatable regions for future registrations

28
Congealing: a process of joint “alignment” of sets of arrays (samples of continuous fields). Three ingredients:
1. A set of arrays in some class
2. A parameterized family of continuous transformations
3. A criterion of joint alignment

29
Congealing Binary Digits. The three ingredients:
1. A set of arrays in some class: binary images
2. A parameterized family of continuous transformations: affine transforms
3. A criterion of joint alignment: entropy minimization

30
Criterion of Joint Alignment: minimize the sum of pixel-stack entropies by transforming each image. (A pixel stack is the set of values at one pixel location across all images in the set.)
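The congealing criterion for binary images can be sketched directly: for each pixel location, estimate P(pixel = 1) across the stack and sum the binary entropies. A well-aligned set has near-deterministic stacks (entropy near zero); a jumbled set does not. The arrays are toy stand-ins:

```python
import numpy as np

def pixel_stack_entropy(images):
    """Sum of binary entropies over all pixel stacks of a (k, h, w) set."""
    p = images.mean(axis=0)            # P(pixel = 1) at each location
    eps = 1e-12                        # avoid log(0)
    h = -(p * np.log2(p + eps) + (1 - p) * np.log2(1 - p + eps))
    return h.sum()

# 10 identical binary images: every stack is deterministic.
aligned = np.tile(np.eye(8), (10, 1, 1))

# 10 random binary images: stacks are close to maximally uncertain.
rng = np.random.default_rng(5)
jumbled = rng.integers(0, 2, (10, 8, 8)).astype(float)

print(pixel_stack_entropy(aligned) < pixel_stack_entropy(jumbled))  # True
```

Congealing searches over per-image affine transforms to drive this sum down; the sketch only shows the objective being minimized, not the search.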

31
An Image Factorization: each observed image is factored into a “latent image” and a transform. (Previous work by Grenander, Frey, and Jojic.)

32
A pixel stack

33
The Independent Pixel Assumption. The model assumes independent pixels. This is a poor generative model: true image probabilities don’t match model probabilities, because neighboring pixels are heavily dependent. However, this model is great for alignment and separation of causes! Why? The relative probabilities of “better aligned” and “worse aligned” are usually correct. Once components are separated, a more accurate (and computationally expensive) model can be used to model each component.

34
Congealing: before and after. Each before/after pair implicitly creates a sample of the transform T.

35
Character Models (CVPR 2003). Congealing factors the images into latent images and transforms; a kernel density estimator (or other estimator) over the latent images gives the latent-image probability density for zeroes, P(I_L), and one over the transforms gives the transform probability density for zeroes, P(T).

36
How do we line up a new image? Maintain a sequence of successively “sharper” models (step 0, step 1, …, step N) and take one gradient step with respect to each model in turn.

37
Digit Models from One Example
