
Slide 1

Current Research in Forensic Toolmark Analysis
Helping to satisfy the "new" needs of forensic scientists with state-of-the-art microscopy, computation, and statistics

Slide 2

Outline
o Introduction
o Instruments for 3D toolmark analysis
o 3D toolmark data
o The statistics: identification error rates, "match" confidence, "match" probability
o Statistics from available practitioner data

Slide 3

Quantitative Criminalistics
All forms of physical evidence can be represented as numerical patterns:
o Toolmark surfaces
o Dust and soil categories and spectra
o Hair/fiber categories and spectra
o Craniofacial landmarks
o Triangulated fingerprint minutiae
Machine learning trains a computer to recognize such patterns:
o Can give "...the quantitative difference between an identification and non-identification" (Moran)
o Can yield identification error-rate estimates
o May even yield confidence measures for I.D.s
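To make "evidence as numerical patterns" concrete, here is a minimal sketch of turning a 1D striation profile into a fixed-length feature vector a learning algorithm can compare. The summary statistics chosen here are purely illustrative assumptions, not the feature set the researchers actually use.

```python
import numpy as np

def profile_features(profile):
    """Summarize a 1D toolmark profile as a fixed-length feature vector.
    These four statistics (mean, spread, range, mean slope magnitude)
    are an invented, illustrative feature set."""
    p = np.asarray(profile, dtype=float)
    return np.array([p.mean(), p.std(), p.max() - p.min(),
                     np.mean(np.abs(np.diff(p)))])

# Two simulated striation profiles (sine waves plus noise as stand-ins)
rng = np.random.default_rng(0)
mark_a = np.sin(np.linspace(0, 20, 200)) + 0.1 * rng.standard_normal(200)
mark_b = np.sin(np.linspace(0, 20, 200) + 0.3) + 0.1 * rng.standard_normal(200)

print(profile_features(mark_a))  # a numerical pattern a computer can compare
```

Any downstream classifier then works on these vectors rather than on the raw surfaces.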

Slide 4

Data Acquisition for Toolmarks
o Comparison microscope
o Confocal microscope
o Focus-variation microscope
o Scanning electron microscope

Slide 5

Screwdriver Striation Patterns in Lead
[Figure: 2D profiles and interactive 3D surfaces]

Slide 6

Bullets
[Figure: bullet base, 9mm Ruger barrel; bullet base, 9mm Glock barrel]

Slide 7

Close-up: Land Engraved Areas

Slide 8

What can we do with all this microscope data?
Statistical pattern comparison! Modern algorithms for this are called machine learning.
o The idea is to measure features that characterize the physical evidence.
o Train an algorithm to recognize "major" differences between groups of features while taking into account natural variation and measurement error.

Slide 9

Visual exploration: 3D PCA of 760 real and simulated mean profiles of primer shears from 24 Glocks; ~45% of the variance is retained in the first three principal components.
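A minimal sketch of the PCA step, using an SVD on centered data; the matrix here is random surrogate data standing in for the 760 primer-shear mean profiles, and the dimensions are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
# Surrogate data: 760 "mean profiles" with 50 measurements each
# (random stand-ins for the real primer-shear profiles).
X = rng.standard_normal((760, 50)) @ rng.standard_normal((50, 50))

Xc = X - X.mean(axis=0)               # center each feature
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
var_ratio = s**2 / np.sum(s**2)       # variance explained per component
scores3d = Xc @ Vt[:3].T              # 3D coordinates for a scatter plot
print(f"variance retained by 3 PCs: {var_ratio[:3].sum():.1%}")
```

The `scores3d` coordinates are what get plotted in the interactive 3D view; the retained-variance figure says how faithful that 3D picture is.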

Slide 10

Support Vector Machines
Support vector machines (SVMs) determine efficient association rules in the absence of specific knowledge of the underlying probability densities.
[Figure: SVM decision boundary]
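As a sketch of the idea, here is a toy linear SVM trained by sub-gradient descent on the regularized hinge loss; real work would use a mature solver, and the two-class Gaussian data are invented for illustration.

```python
import numpy as np

def linear_svm(X, y, lam=0.01, lr=0.1, epochs=200):
    """Toy linear SVM: minimize (lam/2)||w||^2 + mean hinge loss by
    sub-gradient descent. Labels y must be in {-1, +1}."""
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        viol = margins < 1                              # points inside the margin
        gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / n
        gb = -y[viol].sum() / n
        w -= lr * gw
        b -= lr * gb
    return w, b

rng = np.random.default_rng(2)
X = np.vstack([rng.normal(-2, 1, (50, 2)), rng.normal(2, 1, (50, 2))])
y = np.array([-1] * 50 + [1] * 50)
w, b = linear_svm(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
print(f"training accuracy: {acc:.0%}")
```

Note that nothing here estimates a probability density: the boundary comes directly from the margin criterion, which is the point the slide makes.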

Slide 11

18D PCA-SVM Primer Shear I.D. Model, 2000 Bootstrap Resamples
Refined bootstrapped I.D. error rate for primer-shear striation patterns = 0.35%; 95% C.I. = [0%, 0.83%] (sample size = 720 real and simulated profiles).
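The bootstrap interval above can be sketched as follows; the simulated 0/1 error outcomes (3 planted errors in 720 trials) are an assumption for illustration, not the study's actual per-profile results.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 720
# Simulated classifier outcomes: 1 = misidentification, 0 = correct.
outcomes = np.zeros(n)
outcomes[:3] = 1                      # plant a few errors for illustration
rng.shuffle(outcomes)

# Resample with replacement and recompute the error rate each time,
# matching the slide's 2000 bootstrap resamples.
boot = np.array([rng.choice(outcomes, size=n, replace=True).mean()
                 for _ in range(2000)])
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"error rate: {outcomes.mean():.2%}, 95% CI [{lo:.2%}, {hi:.2%}]")
```

With very few errors the lower percentile typically lands at 0%, which is why intervals like [0%, 0.83%] are the natural output of this procedure.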

Slide 12

How good of a "match" is it? Conformal Prediction (Vovk)
o Data should be IID, but that's it.
o An orthodox "frequentist" approach, with roots in algorithmic information theory.
o Confidence on a scale of 0%-100%: can give a judge or jury an easy-to-understand measure of the reliability of a classification result.
o Testable claim: the long-run I.D. error rate should equal the chosen significance level.
[Figure: cumulative number of errors vs. sequence of unknown observation vectors; 80% confidence gives a 20% error slope (0.2), 95% confidence a 5% slope (0.05), 99% confidence a 1% slope (0.01).]
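A minimal sketch of the conformal idea: compare a test point's nonconformity score against calibration scores to get a p-value with a guaranteed long-run error rate. The nonconformity measure here (distance-like scores drawn for illustration) is an assumption; any measure works as long as the data are IID.

```python
import numpy as np

def conformal_pvalue(cal_scores, test_score):
    """Conformal p-value: rank of the test nonconformity score among
    the calibration scores. Under IID data, P(p <= eps) <= eps, so
    rejecting when p < 0.05 errs at most 5% of the time in the long run."""
    cal = np.asarray(cal_scores)
    return (np.sum(cal >= test_score) + 1) / (len(cal) + 1)

rng = np.random.default_rng(4)
cal = np.abs(rng.standard_normal(99))   # e.g. distances to a class center
p_in = conformal_pvalue(cal, 0.1)       # typical point -> large p-value
p_out = conformal_pvalue(cal, 5.0)      # outlier -> small p-value
print(p_in, p_out)
```

The "slope" picture on the slide is exactly this guarantee observed empirically: at 95% confidence, errors accumulate along a line of slope 0.05.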

Slide 13

Conformal Prediction: 14D PCA-SVM Decision Model for Screwdriver Striation Patterns
o Theoretical (long-run) error rate: 5%; empirical error rate: 5.3%.
o For the 95%-CPT (PCA-SVM), confidence intervals will fail to contain the correct I.D. 5% of the time in the long run.
o A straightforward validation/explanation picture for court.

Slide 14

How good of a "match" is it? Efron's Empirical Bayes
o An I.D. is output for each questioned toolmark; this is a computer "match."
o But what is the probability that the tool is truly the source of the toolmark?
o A similar problem arises in genomics when detecting disease from microarray data: there, the data and Bayes' theorem are used to obtain an estimate.

Slide 15

A Bayesian Hierarchical Model: Believability Curve
JAGS MCMC fit of a Bayesian over-dispersed Poisson model with intercept, on the test set.
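As a sketch of what an MCMC fit like the JAGS run does, here is a hand-rolled random-walk Metropolis sampler for a much simpler model: a single Poisson rate with a weak Exponential(1) prior. The data, prior, and model are illustrative assumptions, far simpler than the slide's hierarchical over-dispersed Poisson.

```python
import numpy as np

rng = np.random.default_rng(5)
counts = rng.poisson(4.0, size=30)     # stand-in count data

def log_post(lam):
    """Log posterior (up to a constant): Poisson likelihood x Exp(1) prior."""
    if lam <= 0:
        return -np.inf
    return np.sum(counts * np.log(lam) - lam) - lam

chain, lam = [], 1.0
for _ in range(5000):
    prop = lam + 0.3 * rng.standard_normal()       # random-walk proposal
    # Metropolis accept/reject on the log scale
    if np.log(rng.uniform()) < log_post(prop) - log_post(lam):
        lam = prop
    chain.append(lam)
post = np.array(chain[1000:])          # drop burn-in
print(f"posterior mean rate: {post.mean():.2f}")
```

JAGS automates exactly this kind of sampling for far richer models; the output in both cases is a set of posterior draws from which curves like the believability curve are read off.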

Slide 16

Bayes Factors/Likelihood Ratios
o In the "forensic Bayesian framework," the likelihood ratio (LR) is the measure of the weight of evidence.
o LRs are called Bayes factors by most statisticians.
o LRs measure the support the "evidence" lends to the "prosecution hypothesis" (Hp) versus the "defense hypothesis" (Hd).
o From Bayes' theorem, the posterior odds equal the prior odds times the LR:

  P(Hp | E) / P(Hd | E) = [P(E | Hp) / P(E | Hd)] x [P(Hp) / P(Hd)]
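A minimal numeric sketch of an LR computed from fitted score models: the two normal densities below (their means and spreads are invented parameters) stand in for fitted distributions of comparison scores under known-match and known-non-match conditions.

```python
import numpy as np

def normal_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def likelihood_ratio(score):
    """LR = P(score | Hp) / P(score | Hd), with hypothetical fitted models:
    known-match scores ~ N(0.8, 0.1), known-non-match ~ N(0.2, 0.15)."""
    return normal_pdf(score, 0.8, 0.1) / normal_pdf(score, 0.2, 0.15)

print(likelihood_ratio(0.75))   # high similarity -> LR >> 1, favors Hp
print(likelihood_ratio(0.25))   # low similarity  -> LR << 1, favors Hd
```

Multiplying such an LR by the prior odds gives the posterior odds, which is the Bayes-theorem relationship on this slide.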

Slide 17

Bayes Factors/Likelihood Ratios
Using the fitted posteriors and priors, we can obtain likelihood ratios (Tippett, Ramos).
[Figure: known-match LR values vs. known-non-match LR values]

Slide 18

Available Large-Scale Practitioner Studies
Two large-scale published studies:
o 10-Barrel Test (Hamby):
  o 626 practitioners (24 countries)
  o 15 "unknowns" per test set
  o At least one bullet from each of the 10 consecutively manufactured barrels
  o Number of examiner errors committed = 0
o GLOCK Cartridge Case Test (Hamby):
  o 1,632 9-mm cartridge cases fired from Glocks, 1 case per Glock
  o All cartridge cases compared pair-wise
  o Number of pairs judged to have enough surface-detail agreement to be (falsely) "matching" = 0
o The AFTE Theory of Identification standard was used: www.swggun.org

Slide 19

So does that mean the error rate is 0%?
o 0% is the "frequentist" estimate.
o We looked to sports statistics for how to handle low-scoring games.
o "Bayesian" statistics provide complementary methods of analysis that can work much better for estimating small probabilities.

Slide 20

Available Large-Scale Practitioner Studies
o For the 10-Barrel Test, we need to estimate a small error rate.
o For the GLOCK test, we need to estimate a small random match probability (RMP).
o Use the Bayesian "beta-binomial" method when no "failures" are observed (Schuckers).

Slide 21

Available Large-Scale Practitioner Studies
The basic idea of the Reverend Bayes: prior knowledge combined with data yields updated knowledge. For the error rate/RMP p:

  Posterior(p | data) ∝ Uninformative-prior(p) x Binomial(data | p)

This gives updated estimates of the error rate/RMP.
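When zero errors are observed, the beta-binomial posterior has a simple closed form; a sketch under a uniform Beta(1, 1) prior follows. The trial count 626 x 15 is an assumed stand-in for the 10-Barrel study's scale, used only for illustration.

```python
def zero_failure_posterior(n, q=0.975):
    """Posterior summary for an error rate p when 0 errors are seen in n
    trials, under a uniform Beta(1, 1) prior.  The posterior is
    Beta(1, n + 1), whose CDF is 1 - (1 - p)**(n + 1), so both the mean
    and any quantile have closed forms."""
    mean = 1.0 / (n + 2)                         # posterior mean
    upper = 1.0 - (1.0 - q) ** (1.0 / (n + 1))   # posterior q-quantile
    return mean, upper

# Assumed trial count: 626 examiners x 15 unknowns each.
mean, upper = zero_failure_posterior(626 * 15)
print(f"posterior mean: {mean:.5%}, 97.5% quantile: {upper:.5%}")
```

At this scale the posterior mean comes out near 0.011% with an upper bound near 0.04%, the same order as the figures reported on the next slide, showing how a nonzero rate is estimated even though no errors were seen.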

Slide 22

Available Large-Scale Practitioner Studies
Given the observed data, and assuming "prior ignorance," the posterior error-rate/RMP distributions give:
o Average examiner error rate: 0.011% [0.00023%, 0.040%] (posterior distribution, 10-Barrel)
o RMP: 0.000086% [0.0000020%, 0.00031%] (posterior distribution, GLOCK)

Slide 23

Acknowledgements
Professor Chris Saunders (SDSU), Professor Christophe Champod (Lausanne), Alan Zheng (NIST)
Research Team: Dr. Martin Baiker, Ms. Helen Chan, Ms. Julie Cohen, Mr. Peter Diaczuk, Dr. Peter De Forest, Mr. Antonio Del Valle, Ms. Carol Gambino, Dr. James Hamby, Ms. Alison Hartwell, Esq., Dr. Thomas Kubic, Esq., Ms. Loretta Kuo, Ms. Frani Kammerman, Dr. Brooke Kammrath, Mr. Chris Lucky, Off. Patrick McLaughlin, Dr. Linton Mohammed, Mr. Nicholas Petraco, Dr. Dale Purcel, Ms. Stephanie Pollut, Dr. Peter Pizzola, Dr. Graham Rankin, Dr. Jacqueline Speir, Dr. Peter Shenkin, Ms. Rebecca Smith, Mr. Chris Singh, Mr. Peter Tytell, Ms. Elizabeth Willie, Ms. Melodie Yu, Dr. Peter Zoon
