
1 A Generic Method for Evaluating Appearance Models and Assessing the Accuracy of NRR
Roy Schestowitz, Carole Twining, Tim Cootes, Vlad Petrovic, Chris Taylor and Bill Crum

2 Overview
Motivation
Assessment methods: overlap-based, model-based
Experiments: validation, comparison of methods, practical application
Conclusions

3 Motivation for Assessment
Different methods for NRR: representation of warp (including regularisation), similarity measure, optimisation, pair-wise vs group-wise
Limitations of current methods of assessment: artificial warps (algorithm testing, but not QA), overlap measures (need for ground truth)

4 Overlap-based Assessment
[Diagram: the original image and its label image are each warped (A, B); label overlap tests are then applied to the warped labels.]
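A minimal sketch of the label-overlap computation behind such tests, assuming binary label masks stored as NumPy arrays; the generalised overlap reported later aggregates scores of this kind over all labels and subjects, but the exact weighting scheme is not reproduced here.

```python
import numpy as np

def dice_overlap(a: np.ndarray, b: np.ndarray) -> float:
    """Dice coefficient between two binary label masks."""
    a, b = a.astype(bool), b.astype(bool)
    denom = a.sum() + b.sum()
    return 2.0 * np.logical_and(a, b).sum() / denom if denom else 1.0

def tanimoto_overlap(a: np.ndarray, b: np.ndarray) -> float:
    """Tanimoto (Jaccard) coefficient: intersection over union."""
    a, b = a.astype(bool), b.astype(bool)
    union = np.logical_or(a, b).sum()
    return np.logical_and(a, b).sum() / union if union else 1.0

# Example: overlap between one label propagated through two registrations.
label_a = np.zeros((64, 64), dtype=bool); label_a[20:40, 20:40] = True
label_b = np.zeros((64, 64), dtype=bool); label_b[22:42, 18:38] = True
print(dice_overlap(label_a, label_b), tanimoto_overlap(label_a, label_b))
```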

5 Model-Based Assessment

6 Model-based Framework
Registered image set → statistical appearance model
Good registration → good model: generalises well to new examples, specific to the class of images
Registration quality ↔ model quality: the problem is transformed into defining model quality, giving a ground-truth-free assessment of NRR

7 Building an Appearance Model
[Diagram: NRR of the training set yields a warp field for each image, sampled as a shape vector x; each image is warped to the mean shape and raster-scanned to give a texture vector g, aligned to the mean; statistical analysis of x and g gives the model.]
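To make the statistical-analysis step concrete, here is a minimal sketch assuming the NRR has already produced a shape vector x (sampled warp field) and a shape-normalised texture vector g per training image; the combined-PCA construction follows the usual appearance-model recipe, but the retained variance and the shape-texture weighting are illustrative assumptions.

```python
import numpy as np

def pca(data: np.ndarray, var_keep: float = 0.95):
    """PCA of row-wise samples; returns mean, modes (as columns) and variances."""
    mean = data.mean(axis=0)
    centred = data - mean
    # SVD is numerically preferable to forming the covariance matrix explicitly.
    _, s, vt = np.linalg.svd(centred, full_matrices=False)
    var = s ** 2 / (len(data) - 1)
    k = np.searchsorted(np.cumsum(var) / var.sum(), var_keep) + 1
    return mean, vt[:k].T, var[:k]

def build_appearance_model(shapes: np.ndarray, textures: np.ndarray):
    """shapes: (n, 2*n_points) sampled warp fields; textures: (n, n_pixels)."""
    s_mean, s_modes, s_var = pca(shapes)
    g_mean, g_modes, g_var = pca(textures)
    # Shape parameters are commonly scaled so shape and texture variances are
    # commensurate before the combined PCA (this weighting is an assumption here).
    w = np.sqrt(g_var.sum() / s_var.sum())
    b_s = w * (shapes - s_mean) @ s_modes
    b_g = (textures - g_mean) @ g_modes
    c_mean, c_modes, c_var = pca(np.hstack([b_s, b_g]))
    return dict(shape=(s_mean, s_modes), texture=(g_mean, g_modes),
                combined=(c_mean, c_modes, c_var), shape_weight=w)
```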

8-25 Training and Synthetic Images (animation sequence)
[Animation: the training images, brought into correspondence by NRR, occupy a region of image space; a model is built from them; synthetic images are then generated from the model and occupy their own region of image space.]
26 Model Quality
Given a measure d of image distance, compare the training and synthetic image sets
d can be Euclidean or shuffle distance between images
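A hedged sketch of both ingredients, under the assumption of 2-D single-channel images stored as NumPy arrays: the shuffle distance compares each pixel of one image with the best-matching pixel in a small neighbourhood of the other, and model quality is summarised by specificity/generalisation-style nearest-neighbour distances between the synthetic and training sets (the authors' exact normalisation and sample sizes are not reproduced here).

```python
import numpy as np

def shuffle_distance(a: np.ndarray, b: np.ndarray, r: int = 2) -> float:
    """Mean over pixels of the minimum absolute difference between a pixel of
    `a` and any pixel of `b` within a (2r+1)x(2r+1) shuffle neighbourhood."""
    h, w = a.shape
    pad = np.pad(b.astype(float), r, mode='edge')
    best = np.full(a.shape, np.inf)
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            shifted = pad[dy:dy + h, dx:dx + w]
            best = np.minimum(best, np.abs(a - shifted))
    return float(best.mean())

def euclidean_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Root-mean-square pixel difference."""
    return float(np.sqrt(((a - b) ** 2).mean()))

def specificity(synthetic, training, d=shuffle_distance):
    """Mean distance from each synthetic image to its nearest training image."""
    return float(np.mean([min(d(s, t) for t in training) for s in synthetic]))

def generalisation(training, synthetic, d=shuffle_distance):
    """Mean distance from each training image to its nearest synthetic image."""
    return float(np.mean([min(d(t, s) for s in synthetic) for t in training]))
```

Smaller values indicate a better model under this formulation, which is consistent with the results slides below, where the model-based measures increase with misregistration.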

27 Validation Experiments

28 Experimental Design
MGH dataset (37 brains), selected 2D slice
Initial 'correct' NRR
Progressive perturbation of the registration, with 10 random instantiations for each perturbation magnitude
Comparison of the two different measures: overlap-based and model-based

29 Brain Data
Eight labels per image: L/R white matter (wm), L/R grey matter (gm), L/R lateral ventricle (lv), L/R caudate nucleus (cn)
[Figure: example image with left- and right-hemisphere label maps.]

30 Examples of Perturbed Images
Perturbations generated with a CPS warp with 1 knot point
[Figure: example warp and perturbed images with increasing mean pixel displacement.]
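A hedged sketch of how a one-knot perturbation and its mean pixel displacement could be generated; a Gaussian-bump displacement is used purely as an illustrative stand-in for the CPS warps used in the experiments (assumed here to be clamped-plate splines), and the knot position, amplitude and width are arbitrary.

```python
import numpy as np

def single_knot_warp(shape, knot, amplitude, sigma=15.0, seed=0):
    """Displacement field from one knot point: a Gaussian bump in a random
    direction (illustrative stand-in for the CPS warp shown on the slide)."""
    rng = np.random.default_rng(seed)
    direction = rng.normal(size=2)
    direction /= np.linalg.norm(direction)
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    bump = np.exp(-((yy - knot[0]) ** 2 + (xx - knot[1]) ** 2) / (2 * sigma ** 2))
    dy, dx = amplitude * direction[0] * bump, amplitude * direction[1] * bump
    return dy, dx

def mean_pixel_displacement(dy, dx):
    """Mean magnitude of the displacement field, used to index perturbation size."""
    return float(np.mean(np.hypot(dy, dx)))

dy, dx = single_knot_warp((128, 128), knot=(64, 64), amplitude=6.0)
print(mean_pixel_displacement(dy, dx))
```

Applying the field to an image would additionally require interpolation (e.g. scipy.ndimage.map_coordinates), which is omitted here.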

31 Results – Generalised Overlap
Overlap decreases monotonically with misregistration

32 Results – Model-Based
Measures increase monotonically with misregistration

33 Results – Sensitivities
Specificity is the most sensitive method
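The deck does not spell out how sensitivity was computed; one plausible formulation, stated here purely as an assumption, is the slope of a measure with respect to mean pixel displacement normalised by the scatter of that measure, so that steep, low-noise measures score highest.

```python
import numpy as np

def sensitivity(displacements, measure_values):
    """Slope of the measure vs. misregistration, divided by the residual
    standard deviation of a linear fit (illustrative definition, an assumption)."""
    d = np.asarray(displacements, dtype=float)
    m = np.asarray(measure_values, dtype=float)
    slope, intercept = np.polyfit(d, m, 1)
    residual_sd = np.std(m - (slope * d + intercept))
    return abs(slope) / residual_sd if residual_sd else np.inf

# Example: a measure that rises steadily with misregistration,
# 10 random instantiations per perturbation magnitude.
disp = np.repeat(np.arange(0, 6), 10)
vals = 0.8 * disp + np.random.default_rng(0).normal(0, 0.3, disp.size)
print(sensitivity(disp, vals))
```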

34 Practical Application – NRR Benchmark

35 Practical Application
3 registration algorithms compared: pair-wise registration, group-wise registration, congealing
2 brain datasets used: MGH dataset, dementia dataset
2 assessment methods: model-based (specificity), overlap-based

36 Practical Application - Results
Results are consistent across the two assessment methods
Group-wise NRR outperforms pair-wise, which outperforms congealing

37 Extension to 3-D
The method was implemented and tested in 3-D
The shuffle neighbourhood can be a box, a cube, a plane-based comparison (slice-by-slice), or a sphere (see the sketch below)
The full validation experiments are too laborious to replicate in 3-D; instead, 4-5 NRR algorithms are compared
Ongoing work uses annotated IBIM data, so results can be validated by measuring label overlap
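A sketch of a brute-force 3-D shuffle distance with a selectable neighbourhood shape, assuming single-channel volumes stored as NumPy arrays; the box and slice-by-slice variants mentioned on the slide would follow from restricting the offset set, and are omitted for brevity.

```python
import numpy as np
from itertools import product

def shuffle_distance_3d(a, b, r=1, shape='cube'):
    """3-D shuffle distance: each voxel of `a` is compared with the best-matching
    voxel of `b` inside a cube or sphere neighbourhood of radius r."""
    offsets = [off for off in product(range(-r, r + 1), repeat=3)
               if shape == 'cube' or sum(o * o for o in off) <= r * r]
    pad = np.pad(b.astype(float), r, mode='edge')
    best = np.full(a.shape, np.inf)
    d0, d1, d2 = a.shape
    for oz, oy, ox in offsets:
        shifted = pad[r + oz:r + oz + d0, r + oy:r + oy + d1, r + ox:r + ox + d2]
        best = np.minimum(best, np.abs(a - shifted))
    return float(best.mean())

# Example on small random volumes.
vol_a = np.random.default_rng(1).random((16, 16, 16))
vol_b = np.random.default_rng(2).random((16, 16, 16))
print(shuffle_distance_3d(vol_a, vol_b, r=1, shape='sphere'))
```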

38 Conclusions
Overlap-based and model-based approaches are 'equivalent'
Overlap provides the 'gold standard'
Specificity is a good surrogate: monotonically related to misregistration, robust to noise, no need for ground truth, applies only to groups (but to any NRR method)

