Active Appearance Models
Based on the article: T.F. Cootes, G.J. Edwards and C.J. Taylor, "Active Appearance Models", 1998.
Presented by Denis Simakov.

Mission
- Image interpretation by synthesis: a model that yields photo-realistic objects, plus a rapid, accurate and robust algorithm for interpretation.
- Optimization using standard methods is too slow for realistic parameter models.
- Variety of applications: faces, the human body, medical images, test animals.

Building an Appearance Model

Appearance
Appearance = Shape + Texture
- Shape: a tuple of characteristic locations in the image, up to an allowed transformation. Example: contours of the face up to a 2D similarity transformation (translation, rotation, scaling).
- Texture: the intensity (or color) patch of an image in the shape-normalized frame, up to scale and offset of values.

Shape
- A configuration of landmarks. Good landmarks are points consistently located in every image; intermediate points can be added as well.
- Represented by a vector of coordinates, e.g. x = (x_1, ..., x_n, y_1, ..., y_n)^T for n 2D landmarks.
- Configurations x and x' are considered to have the same shape* if they can be mapped onto each other by an appropriate transformation T (registration).
- Shape distance: the distance after registration, d(x, x') = min over allowed T of |T(x) - x'|.
* Theoretical approach to shape analysis: Ian Dryden (University of Nottingham)
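The landmark representation and shape distance above can be sketched in a few lines. This is a minimal illustration (not the authors' code), assuming the allowed transformation class T is the 2D similarity group; the closed-form alignment uses the complex-number representation of 2D points:

```python
import numpy as np

def shape_distance(x, y):
    """Distance between two landmark configurations after aligning
    x to y with a 2D similarity transform (translation, rotation,
    scaling). x, y: (n, 2) arrays of landmark coordinates."""
    zx = x[:, 0] + 1j * x[:, 1]          # complex representation
    zy = y[:, 0] + 1j * y[:, 1]
    zx = zx - zx.mean()                  # remove translation
    zy = zy - zy.mean()
    # closed-form rotation+scale a minimizing |a*zx - zy|^2
    a = np.vdot(zx, zy) / np.vdot(zx, zx)
    return np.linalg.norm(a * zx - zy)
```

Two configurations related by a pure similarity transform then have (numerically) zero distance, i.e. "the same shape" in the sense above.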

Shape-free texture
An attempt to eliminate texture variation due to different shape ("orthogonalization"). Given a shape x and a target "normal" shape x' (typically the average one), we warp the image so that the points of x move into the corresponding points of x'.

Modeling Appearance
Training set (annotated grayscale images) → shape (tuple of locations) and texture (shape-free) → via PCA → model of shape and model of texture → model of appearance.

Training set
- Annotated images.* Done manually, this is the most human-time-consuming and error-prone part of building the models.
- (Semi-)automatic annotation methods are being developed.**
* Example from: Active Shape Model Toolkit (for MATLAB), Visual Automation Ltd.
** A number of references is given in: T.F. Cootes and C.J. Taylor, "Statistical Models of Appearance for Computer Vision".

Training sets for shape and texture models
From the initial training set (annotated images) we obtain {x_1, ..., x_n}, a set of shapes, and {g_1, ..., g_n}, a set of shape-free textures. We allow the following transformations:
- S for the shape: translation (t_x, t_y), rotation θ, scaling s.
- T for the texture: scaling α and offset β, with T(g) = (g - β·1)/α.
Align both sets under these transformations by minimizing the distance between the shapes (textures) and their mean. Iterative procedure: align all x_i (g_i) to the current mean, recalculate the mean with the new x_i (g_i), repeat until convergence.
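The iterative alignment procedure can be sketched for the shape case (a hypothetical minimal version, again using the complex representation; the texture case is analogous, with scale/offset normalization instead of similarity alignment):

```python
import numpy as np

def align_to(z, z_ref):
    """Similarity-align complex shape z to z_ref (both centred)."""
    a = np.vdot(z, z_ref) / np.vdot(z, z)
    return a * z

def procrustes_mean(shapes, n_iter=20, tol=1e-10):
    """Iteratively align a set of shapes to their evolving mean.
    shapes: list of (n, 2) landmark arrays."""
    zs = [s[:, 0] + 1j * s[:, 1] for s in shapes]
    zs = [z - z.mean() for z in zs]          # remove translation
    mean = zs[0] / np.linalg.norm(zs[0])     # initial reference
    for _ in range(n_iter):
        zs = [align_to(z, mean) for z in zs]
        new_mean = np.mean(zs, axis=0)
        new_mean /= np.linalg.norm(new_mean) # fix the scale of the mean
        if np.linalg.norm(new_mean - mean) < tol:
            break
        mean = new_mean
    return mean, zs
```

Normalizing the mean at each step is needed because otherwise the procedure could shrink all shapes toward zero.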

Examples of training sets
Shapes and the mean shape; textures.
* From the work of Mikkel B. Stegmann, Section for Image Analysis, The Technical University of Denmark.

Model of Shape
- Training set {x_1, ..., x_n} of aligned shapes.
- Apply PCA to the training set.
- Model of shape: x = x̄ + P_s b_s, where x̄ (the mean shape) and P_s (the matrix of eigenvectors) define the model; b_s is the vector of model parameters.
- Range of variation of the parameters: determined by the eigenvalues, e.g. |b_i| ≤ 3·sqrt(λ_i).
- Example: 3 modes of a shape model.
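Such a PCA model can be sketched with a small helper (illustrative only; the function and variable names are mine, and the same routine serves for the texture and combined models below):

```python
import numpy as np

def build_pca_model(X, var_kept=0.98):
    """PCA model x ~= x_mean + P @ b from the rows of X (one aligned
    shape vector per row). Keeps enough eigenvectors to explain a
    fraction var_kept of the total variance."""
    x_mean = X.mean(axis=0)
    Xc = X - x_mean
    # eigen-decomposition of the covariance via SVD of the data
    U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    eigvals = s ** 2 / (len(X) - 1)
    k = np.searchsorted(np.cumsum(eigvals) / eigvals.sum(), var_kept) + 1
    P = Vt[:k].T                      # columns are eigenvectors
    return x_mean, P, eigvals[:k]

def project(x, x_mean, P):
    """Model parameters b for a shape x."""
    return P.T @ (x - x_mean)

def reconstruct(b, x_mean, P):
    return x_mean + P @ b
```

Clamping each b_i to ±3·sqrt(λ_i), as in the slide, keeps generated shapes close to the training distribution.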

Model of Texture
- Training set {g_1, ..., g_n} of shape-free, normalized image patches.
- Apply PCA. Model of texture: g = ḡ + P_g b_g.
- Range of variation of the parameters: |b_i| ≤ 3·sqrt(λ_i), as for the shape model.
- Example: 1st mode of a texture model.

Combining the two models
- Joint parameter vector: b = (W_s b_s, b_g)^T, where the diagonal matrix W_s accounts for the different units of shape and texture parameters.
- Training set: for every pair (x_i, g_i) we obtain such a joint vector b_i.
- Apply PCA to the training set {b_1, ..., b_n}. Model for the parameters: b = P_c c, P_c = [P_cs | P_cg]^T.
- Finally, the combined model: x = x̄ + Q_s c, g = ḡ + Q_g c, where Q_s = P_s W_s^{-1} P_cs and Q_g = P_g P_cg.
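One common concrete choice for W_s (described in Cootes and Taylor's report "Statistical Models of Appearance for Computer Vision") is W_s = r·I, with r² the ratio of total texture variance to total shape variance; a sketch, with names of my own choosing:

```python
import numpy as np

def joint_vector(b_s, b_g, eig_s, eig_g):
    """Concatenate shape and texture parameters into one vector
    b = (W_s b_s, b_g), taking W_s = r*I with r^2 = (total texture
    variance) / (total shape variance). eig_s, eig_g: the PCA
    eigenvalues of the shape and texture models."""
    r = np.sqrt(np.sum(eig_g) / np.sum(eig_s))
    return np.concatenate([r * np.asarray(b_s), np.asarray(b_g)])
```

The second PCA is then applied to the set {b_i} exactly as for the shape and texture models, yielding the combined parameters c.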

Examples (combined model)
- Several modes of the combined model; a color model (by Gareth Edwards).
- Self-portrait of the inventor: Tim Cootes, his shape, and a mode of the model.

Exploiting an Appearance Model

Generating synthetic images: example
By varying the parameters c of the appearance model we obtain synthetic images.

Active Appearance Model (AAM)
- Difference vector: δI = I_i - I_m, where I_i is the input (new) image and I_m is the model-generated (synthetic) image for the current estimate of the parameters.
- Given: (1) an appearance model, (2) a new image, (3) a starting approximation. Find: the best matching synthetic image.
- Approach: search for the best match by minimizing Δ = |δI|², varying the parameters of the model.

Predicting the difference of parameters
- Knowing the matching error δI, we want to know how to improve the parameters c.
- Approximate this relation linearly: δc = A δI.
- Precompute A:
  - Include in δc extra parameters: translation, rotation and scaling of the shape; scaling and offset of the gray levels (texture).
  - Take δI in the shape-normalized frame, i.e. δI = δg, where the textures are warped into the same shape.
  - Generate pairs (δc, δg) and estimate A by linear regression.
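The regression step can be sketched as an ordinary least-squares fit over the generated perturbation pairs (illustrative; in the paper the pairs come from systematically perturbing the model on training images and recording the resulting texture residuals):

```python
import numpy as np

def estimate_A(dC, dG):
    """Fit the linear prediction dc = A @ dg from sampled pairs.
    dC: (m, p) parameter displacements, dG: (m, q) texture residuals,
    one (dc, dg) pair per row. Returns A with shape (p, q)."""
    # multivariate least squares for dG @ A.T ~= dC
    A_T, *_ = np.linalg.lstsq(dG, dC, rcond=None)
    return A_T.T
```

In practice m must be large enough (and the perturbations varied enough) for the regression to be well conditioned.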

Checking the quality of the linear prediction
We can check the linear prediction δc = A δg by perturbing the model, e.g. by translation along one axis, in the multi-resolution model (L0: full resolution; L1 and L2: successive levels of the Gaussian pyramid).

AAM Search Algorithm
Iterate the following until |δg| no longer improves:
1. For the current estimate of the parameters c_0, evaluate the error vector δg_0.
2. Predict the displacement of the parameters: δc = A δg_0.
3. Try the new value c_1 = c_0 - k δc with k = 1.
4. Compute the new error vector δg_1.
5. If |δg_1| < |δg_0|, accept c_1 as the new estimate.
6. If c_1 was not accepted, try k = 1.5, 0.5, 0.25, etc.
When no step improves the error, we declare convergence.
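The loop above can be sketched as follows; the `residual` callback, which would render the model for parameters c and compute δg against the input image, stands in for the real image machinery:

```python
import numpy as np

def aam_search(c0, residual, A, max_iter=50):
    """Iterative AAM search. residual(c) returns the texture error
    vector dg for parameters c; A is the precomputed update matrix
    (dc = A @ dg). Stops when no step size improves |dg|."""
    c = np.asarray(c0, dtype=float)
    err = np.linalg.norm(residual(c))
    for _ in range(max_iter):
        dc = A @ residual(c)
        for k in (1.0, 1.5, 0.5, 0.25):      # step sizes from the slide
            c_try = c - k * dc
            err_try = np.linalg.norm(residual(c_try))
            if err_try < err:
                c, err = c_try, err_try      # accept the improved estimate
                break
        else:
            break                            # no k improved: converged
    return c
```

On a toy linear problem, with A chosen as the pseudo-inverse of the residual's Jacobian, the search converges in a single step.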

AAM search: examples
Model of a face and model of a hand. From the work of Mikkel B. Stegmann, Section for Image Analysis, The Technical University of Denmark.

AAM: tracking experiments
AAM and an extension of AAM by Jörgen Ahlberg, Linköping University. Done with the AAM-API (Mikkel B. Stegmann).

AAM: measuring performance
A model trained on 88 hand-labeled images (about 200 pixels wide) was tested on 100 other images. Plots: convergence rate; proportion of correct convergences.

View-Based Active Appearance Models
Based on the article: T.F. Cootes, K.N. Walker and C.J. Taylor, "View-Based Active Appearance Models".
Presented by Denis Simakov.

View-Based Active Appearance Models
Basic idea: account for global variation using several more local models. For example: model 180° of horizontal head rotation using several models, each responsible for a small range of angles.

View-Based Active Appearance Models
- Angle ranges of the models: -40° to +40° (frontal), ±(40° to 60°) (half-profile), beyond ±60° (profile).
- One AAM (Active Appearance Model) succeeds in describing the variation as long as there are no occlusions.
- It turns out that 5 models suffice to deal with 180° of head rotation: 2 for the profile views, 2 for the half-profiles, and 1 for the front view.
- Assuming symmetry, we need to build only 3 distinct models.

Estimation of head orientation
- Assumed relation: c = c_0 + c_c cos(θ) + c_s sin(θ), where θ is the viewing angle and c are the parameters of the AAM; c_0, c_c and c_s are determined from the training set.
- For the shape parameters this relation is theoretically justified; for the appearance parameters its adequacy follows from experiments.
- Determining the pose of a model instance: given c, we calculate the view angle θ from (cos θ, sin θ)^T = R_c (c - c_0), where R_c is the pseudo-inverse of the matrix [c_c | c_s].
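The pose-recovery step amounts to a small least-squares solve (a hypothetical helper; c_0, c_c, c_s would come from fitting the assumed relation to the training set):

```python
import numpy as np

def estimate_angle(c, c0, cc, cs):
    """Recover the viewing angle from AAM parameters, assuming
    c ~= c0 + cc*cos(theta) + cs*sin(theta). Solves for the pair
    (cos theta, sin theta) with the pseudo-inverse of [cc | cs]."""
    x = np.linalg.pinv(np.column_stack([cc, cs])) @ (c - c0)
    return np.arctan2(x[1], x[0])
```

Using arctan2 on the recovered (cos, sin) pair makes the estimate robust to a common positive scaling of the two components.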

Tracking through wide angles
Given several models of appearance that together cover a wide angle range, we can track faces through these angles. To track a face in a video sequence:
1. Match the first frame to one of the models (choose the best).
2. Taking the model instance from the previous frame as the first approximation, run the AAM search algorithm to refine the estimate.
3. Estimate the head orientation (the angle).
4. Decide whether it is time to switch to another model.
5. In case of a model change, estimate the new parameters from the old ones, and refine by the AAM search.

Tracking through wide angles: an experiment
15 previously unseen sequences of known people (20 to 30 frames each). The algorithm could manage 3 frames per second (PIII 450 MHz).

Predicting new views
Given a single view of a face, we want to rotate it to any other position. Within the scope of one model, a simple approach succeeds:
1. Find the best match c of the view to the model.
2. Determine the orientation θ.
3. Calculate the "angle-free" component: c_res = c - (c_0 + c_c cos(θ) + c_s sin(θ)).
4. To reconstruct the view at a new angle α, use the parameters c(α) = c_0 + c_c cos(α) + c_s sin(α) + c_res.
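Steps 3 and 4 transcribe directly into code (a sketch, within one model; the function name is mine):

```python
import numpy as np

def rotate_view(c, theta, alpha, c0, cc, cs):
    """Predict the parameters of the same face at a new angle alpha,
    given its parameters c at a known angle theta, within one model:
    strip the angle-dependent part, then add it back for alpha."""
    c_res = c - (c0 + cc * np.cos(theta) + cs * np.sin(theta))
    return c0 + cc * np.cos(alpha) + cs * np.sin(alpha) + c_res
```

By construction the operation is invertible: rotating from θ to α and back to θ recovers the original parameters.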

Predicting new views: wide angles
To move from one model to another, we have to learn the relationship between their parameters.
- Let c_{i,j} be the "angle-free" component (c_res) of the i-th person in the j-th model.
- Applying PCA for every model, we obtain c_{i,j} = c̄_j + P_j b_{i,j}, where c̄_j is the mean of c_{i,j} over all people i.
- Estimate the relationship between the b_{i,j} for different models j and k by linear regression: b_{i,j} = r_{j,k} + R_{j,k} b_{i,k}.
Now we can reconstruct a new view as follows.

Predicting new views: wide angles (cont.)
Given a view c in model 1 at angle θ, we reconstruct the view in model 2 as follows:
1. Remove the orientation: c_{i,1} = c - (c_0 + c_c cos(θ) + c_s sin(θ)).
2. Project onto the principal components of model 1: b_{i,1} = P_1^T (c_{i,1} - c̄_1).
3. Move to model 2: b_{i,2} = r_{2,1} + R_{2,1} b_{i,1}.
4. Find the "angle-free" component in model 2: c_{i,2} = c̄_2 + P_2 b_{i,2}.
5. Add the orientation: c(α) = c_0 + c_c cos(α) + c_s sin(α) + c_{i,2}.
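The five steps can be transcribed directly (a sketch; all model quantities are assumed given, and each model is passed as a dict of its orientation terms and its PCA of the angle-free components):

```python
import numpy as np

def transfer_view(c, theta, alpha, m1, m2, r21, R21):
    """Reconstruct the view in model 2 at angle alpha from a view c
    in model 1 at angle theta. m1, m2: dicts with keys "c0", "cc",
    "cs" (orientation terms), "mean" and "P" (PCA of the angle-free
    components); (r21, R21): the learned regression from model 1 to
    model 2 parameters."""
    # 1. remove the orientation in model 1
    c_res1 = c - (m1["c0"] + m1["cc"] * np.cos(theta) + m1["cs"] * np.sin(theta))
    # 2. project onto the principal components of model 1
    b1 = m1["P"].T @ (c_res1 - m1["mean"])
    # 3. move to model 2 via the learned linear relation
    b2 = r21 + R21 @ b1
    # 4. angle-free component in model 2
    c_res2 = m2["mean"] + m2["P"] @ b2
    # 5. add the orientation in model 2
    return m2["c0"] + m2["cc"] * np.cos(alpha) + m2["cs"] * np.sin(alpha) + c_res2
```

A useful sanity check: when model 2 equals model 1 and the regression is the identity, transferring a view back to its own angle returns it unchanged.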

Predicting new views: example
The training set includes images of 14 people. A profile view of an unseen person was used to predict the half-profile and front views.