
© J.D. Opdyke 1 Presented at the Joint Statistical Meetings, Miami, Florida, 07/30-08/04, 2011. Bootstraps, Permutation Tests, and Sampling With and Without Replacement: Orders of Magnitude Faster Using SAS®. *The views presented herein are the views of the sole author, J.D. Opdyke, and do not necessarily reflect the views of other conference participants or discussants. All calculations and computations were performed by J.D. Opdyke using SAS®. John Douglas (J.D.) Opdyke*, President, DataMineIt

© J.D. Opdyke 2 Contents
1. Objective – Sampling Algorithms for Big Data, all within SAS®
2. Results – Speed, All Else Equal: No Storage Required
3. Approach: Ask, Which Things is SAS® Extremely Good At v. Other Packages?
4. Exploiting Specific SAS® Speed Advantages:
   a. O(N) is better than O(n) (even when N>>n)??! Huh?!
   b. Sampling With (OPDY) and Without (OPDN) Replacement
   c. Keep it in Memory! No I/O
   d. Linear Time Complexity
5. Competing Approaches within SAS®:
   a. The PROCs: NPAR1WAY, SurveySelect, MultTest
   b. Array-Based Bebbington (1975)
   c. Hashing (Tables and Iterators)
   d. DA (Direct Access), Out-SM (Output Sort-Merge), A4.8 (Tillé)
6. Conclusions: OPDY/OPDN
   - Speed (ORDERS OF MAGNITUDE FASTER (80 Sec. vs. Hours))
   - Scalability (linear time complexity)
   - Robustness (Hashing and Procs Crash)
   - Generalizability (bootstrap multivariate models, permute any test statistic)
7. Acknowledgements & References

© J.D. Opdyke 3 1. Objective: Big Data Sampling Algos within SAS® Specific Objective: Develop sampling algorithms that specifically exploit SAS®'s fast dataset-processing capabilities to implement bootstraps and permutation tests on Big Data without the prohibitive runtime constraints of the existing SAS Procs (NPAR1WAY, SurveySelect, MultTest). SPEED GOAL: Orders of magnitude, all else equal. This speed is constantly needed by large banks and other financial institutions that keep running into the brick wall of runtime constraints: massive amounts of data combined with computationally intensive methods. Existing methods within SAS® simply cannot handle it. And SAS® is arguably faster than any other major statistical software package!

© J.D. Opdyke 4 2. Results: Speed, All Else Equal
Relative (Real*) Runtimes: Challengers v. Bootstrap OPDY & Permutation Test OPDN

N (per Stratum) | # Strata | n = m | Challenger                 | vs. OPDY           | vs. OPDY_Boot_FT1
10,000,         |          |       | SurveySelect               | 218.3x             | 990.0x
10,             |          |       | Hash Table + Hash Iterator | 24.3x              | 28.9x
10,             |          |       | Hash Table + Hash Iterator | Challenger Crashed |

N (per Stratum) | # Strata | n = m | Challenger                 | vs. OPDN           | vs. OPDN_Perm_FT1
7,500,          |          |       | SurveySelect               | 242.0x             | 530.0x
1,000,          |          |       | MultTest                   | 685.1x             | 5,970.0x
7,500,          |          |       | NPAR1WAY                   | 353.0x             | 400.0x
10,000,         |          |       | NPAR1WAY                   | Challenger Crashed |
7,500,          |          |       | Simultaneous Bebbington    | 201.0x             | 566.0x

*Relative CPU runtimes were very similar and are reported, with complete simulation results, in Opdyke (2010) and Opdyke (2011). OPDY_Boot_FT1 and OPDN_Perm_FT1 are the proprietary versions of the published OPDY and OPDN.

© J.D. Opdyke 5 Runtimes in Absolute Terms: OPDY_Boot_FT1 bootstraps in 78 seconds where Proc SurveySelect takes 21.5 hours. OPDN_Perm_FT1 conducts permutation tests in under 2 minutes where Proc MultTest takes over 1 week. 2. Results: Speed, All Else Equal

© J.D. Opdyke 6 3. Approach: Ask, Which Things is SAS® the Best At?
SAS® is VERY FAST compared to other statistical packages at 3 things:
1. Reading in datasets
2. Retaining data values across records
3. Looping on a specific record
So design sampling algorithms that exploit these advantages!

© J.D. Opdyke 7 3. Approach: Ask, Which Things is SAS® the Best At?
Combine 1. and 2. to fill an array and accomplish 3. USING TEMPORARY ARRAYS!
1. There is no faster way to fill an array in SAS®, from scratch, than to read in the values as the dataset is read in, record by record (alternatives like Proc Transpose are SLOW and crash on large arrays).
2. TEMPORARY arrays retain values across records automatically and save HUGE amounts of memory by not assigning names to the individual array cells.
3. For YEARS NOW, TEMPORARY ARRAYS HAVE HAD NO 32,000 CELL/VARIABLE LIMIT! Just 2GB of RAM allows roughly 125 million cells!
4. Also, NO STORAGE REQUIRED!! So there are no storage constraints, and with no I/O, execution is MUCH FASTER.
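A minimal sketch of the fill-as-you-read pattern described above (not the author's production code): the dataset name bigdata, the analysis variable x, and the macro variable &N holding the number of records are assumed, illustrative names.

data _null_;
  array temp{&N} _temporary_;   /* one cell per record; temporary cells carry no variable names */
  set bigdata end=eof;          /* read the dataset sequentially, record by record              */
  temp[_n_] = x;                /* fill the next array cell as each record is read in           */
  if eof then do;
    /* all &N values are now held in memory; the resampling happens here, with no I/O */
  end;
run;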

© J.D. Opdyke 8 3. Approach: Ask, Which Things is SAS® the Best At?
So once a dataset (column) has efficiently spilled into a TEMPORARY array (row), perform the calculations with FAST LOOPING.
1. Since TEMPORARY arrays canNOT be saved to a dataset, users will never crash the code accidentally.
2. Each column can be read in by BY-VARIABLE combination, and the calculated values saved at the BY-VARIABLE level.
3. This is FASTER than DOW looping (see Dorfman, and Opdyke, 2011).
4. When CIs for bootstraps or p-values for permutation tests need to be calculated and saved, use TEMPORARY arrays to hold the m statistics from the m samples, and calculate either one by looping over the m cells of those TEMPORARY arrays, as in the sketch below. The only thing that needs to be saved is the final p-value or confidence interval (CI).
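For example, a permutation p-value can be read straight off the array of permutation statistics. In the sketch below, psums[ ] is the temporary array of the m permutation statistics (as on the OPDN slides later), while &num_perms and obs_sum (the statistic computed on the original, unpermuted data) are assumed, illustrative names; these lines would sit inside the same data _null_ step, after the resampling loop.

count_ge = 0;
do j = 1 to &num_perms;
  if psums[j] >= obs_sum then count_ge + 1;   /* count permutation statistics at least as extreme */
end;
pvalue_upper = count_ge / &num_perms;          /* one-sided (upper-tail) permutation p-value       */
call symputx('pvalue_upper', pvalue_upper);    /* only this scalar leaves the data step            */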

© J.D. Opdyke 9 3. Approach: Ask, Which Things is SAS® the Best At?
Note that in SAS®, if minimizing real runtime is the practical concern, then O(N) algorithms are often better than (faster than) O(n) algorithms, even when N >> n! Why? Because it is MUCH faster to simply read in the entire dataset sequentially than it is to try to sub-select specific records. So theoretical time complexity alone should not drive algorithm development and implementation in SAS®.
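A hedged illustration of the contrast (the dataset name bigdata and the macro variables &N and &n are assumed names, not from the original code): the first step touches every record sequentially, while the second touches only n records but pays for random access on each one.

/* "O(N)": one sequential pass over all N records, the pattern OPDY/OPDN rely on */
data _null_;
  set bigdata;
  /* fill a _TEMPORARY_ array here, as on the previous slides */
run;

/* "O(n)": read only n randomly chosen records via direct access (POINT=) */
data _null_;
  do i = 1 to &n;
    ptr = floor(ranuni(-1) * &N) + 1;
    set bigdata point=ptr;   /* each random jump is far slower than a sequential read */
  end;
  stop;                      /* POINT= requires an explicit STOP                      */
run;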

© J.D. Opdyke 10 4. Exploiting the SAS® Speed Advantages
Sampling With Replacement - OPDY: Once the large TEMPORARY array is efficiently filled, simply sample with replacement using OPDY (One-Pass, Duplicates? Yes). For example, a bootstrap on means simply would be:

array bmeans{&num_bsmps} _TEMPORARY_;   /* the array bounds and loop limits did not survive transcription;    */
array temp{&freq} _TEMPORARY_;          /* &num_bsmps, &freq, and &n_smp are assumed macro-variable names     */
do m = 1 to &num_bsmps;                 /* &num_bsmps = number of bootstrap samples (m)                       */
  x = 0;
  do n = 1 to &n_smp;                   /* &n_smp = bootstrap sample size (n)                                 */
    x = temp[floor(ranuni(-1)*freq) + 1] + x;   /* freq = # of records in the current stratum                 */
  end;
  bmeans[m] = x / &n_smp;
end;

© J.D. Opdyke 11 4. Exploiting the SAS® Speed Advantages
Keeping the bootstrap values in a TEMPORARY array can speed things up by several multiples, and the final CIs and/or bootstrap statistic can be output to a dataset, or saved in macro variables if using a data _null_ (which further speeds things up, since less memory is set aside for a dataset to be output by the data step).
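A minimal sketch of that hand-off, run at the end of the same data _null_ step once bmeans[ ] holds the &num_bsmps bootstrap means; the simple percentile rule and the names lo_ci/hi_ci are illustrative assumptions, not the author's code.

call sortn(of bmeans[*]);                      /* order the bootstrap means in place             */
lo_ci = bmeans[ ceil(0.025 * &num_bsmps) ];    /* 2.5th percentile of the bootstrap distribution */
hi_ci = bmeans[ ceil(0.975 * &num_bsmps) ];    /* 97.5th percentile                              */
call symputx('lo_ci', lo_ci);                  /* only these two scalars leave the data step     */
call symputx('hi_ci', hi_ci);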

© J.D. Opdyke 12 4. Exploiting the SAS® Speed Advantages
Sampling Without Replacement - OPDN: Once the large TEMPORARY array is efficiently filled, simply sample without replacement using OPDN (One-Pass, Duplicates? No). Goodman & Hedetniemi (1982) is PERFECT for this purpose, but it is not noted in the statistics literature (for example, it is not cited in Tillé (2006), an authoritative statistical sampling source, although Pesarin (2001) does identify the method, if not cite its source). An example for a permutation test of the mean is presented below, in two versions of implementation.

© J.D. Opdyke 13 1.DO m = 1 to #permutation tests 2. x 0 3. tot_FREQ_hold # records in current stratum 4. tot_FREQ tot_FREQ_hold 5. do n = 1 to # records in smaller of Control and Treatment samples 6. cell uniform random variate on 1 to tot_FREQ 7. x temp[cell] + x 8. hold temp[cell] 9. temp[cell] temp[tot_FREQ] 10. temp[tot_FREQ] hold 11. tot_FREQ tot_FREQ end; 13. psums[m] x 14.END; OPDN implementation #1 of Goodman & Hedetniemi (1982) for Permutation Tests: *** temp[ ] is the array filled with all the data values, for current stratum, of the variable being permuted *** psums[ ] is the array containing the permutation sample statistic values for every permutation sample 4. Exploiting the SAS ® Speed Advantages

© J.D. Opdyke 14 1.DO m = 1 to #permutation tests 2.tot_FREQ_hold # records in current stratum 3. tot_FREQ tot_FREQ_hold 4. do n = 1 to # records in smaller of Control and Treatment samples 5. cell uniform random variate on 1 to tot_FREQ 6.hold temp[cell] 7. temp[cell] temp[tot_FREQ] 8. temp[tot_FREQ] hold 9. tot_FREQ tot_FREQ end; 11. psums[m] sum(temp[tot_FREQ] to temp[tot_FREQ_hold]) 12.END; OPDN implementation #2 of Goodman & Hedetniemi (1982) for Permutation Tests: *** temp[ ] is the array filled with all the data values, for current stratum, of the variable being permuted *** psums[ ] is the array containing the permutation sample statistic values for every permutation sample 4. Exploiting the SAS ® Speed Advantages

© J.D. Opdyke 15 4. Exploiting the SAS® Speed Advantages
Note that Goodman & Hedetniemi (1982) uses only a single array for ALL the permutation samples: since the initial order of the array cells doesn't matter for random sampling, the array is simply left as it was from the previous sample when sampling of the next permutation sample begins. This is an extremely efficient use of the large TEMPORARY array containing the entire dataset/stratum of values.

© J.D. Opdyke 16 4. Exploiting the SAS® Speed Advantages
Keep it in Memory: No I/O: Note that neither OPDY nor OPDN writes to disk: the entire algorithm is executed in memory, which GREATLY increases execution speed. And SAS®'s memory management of TEMPORARY arrays is second to none: OPDY and OPDN can handle datasets (technically, strata sizes) orders of magnitude larger than can Proc NPAR1WAY and Hash Tables, both of which crash on datasets that OPDY and OPDN handle easily. With only 2GB of RAM, TEMPORARY arrays of 125 million cells can be used in SAS® v.9.2 before crashing. Finally, both algorithms are SCALABLE, as both have linear time complexity (see Graphs 1 and 2 below). This is not the case for Proc MultTest, Proc NPAR1WAY, or Proc SurveySelect.

© J.D. Opdyke 17 4. Exploiting the SAS® Speed Advantages
Graph 1: OPDY Real Runtime by N*n*m (N = all N across strata). Fitted line: Log10(Real Runtime) = b0 + b1 * Log10(N*n*m)   (1)

© J.D. Opdyke 18 4. Exploiting the SAS® Speed Advantages
Graph 2: OPDN Real Runtime by N*n*m (N = all N across strata). Fitted line: Log10(Real Runtime) = b0 + b1 * Log10(N*n*m)   (2)

© J.D. Opdyke 19 5. Competing Approaches within SAS®
For bootstraps:
- Proc SurveySelect
- A4.8, cited in Tillé (2006), proof derived in Opdyke (2011)
- Hash Table with Hash Iterator
- DA (direct-access)
- Out-SM (output, sort, merge)
For permutation tests:
- The Procs: SurveySelect, NPAR1WAY, MultTest
- Array-based Bebbington (1975)

© J.D. Opdyke 20 Bebbington (1975):
1. Initialize: Let i ← 0, N ← N + 1, n ← n
2. i ← i + 1
3. If n = 0, STOP Algorithm
4. Visit Data Record i
5. N ← N – 1
6. Generate Uniform Random Variate u ~ Uniform[0, 1]
7. If u > n / N, Go To 2. Otherwise,
   Output Record i into Sample
   n ← n – 1
   Go To 2.
Execute the algorithm above m times simultaneously, on each record, using an m-dimensional array.
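A single-sample DATA step sketch of Bebbington's method (the slides run m copies of it simultaneously with an m-dimensional array); the dataset name bigdata and the macro variables &N (stratum size) and &n (sample size) are assumed, illustrative names.

data sample_wor;
  retain N_rem %eval(&N + 1) n_rem &n;   /* step 1: N <- N + 1, n <- n                 */
  set bigdata;                           /* steps 2 and 4: visit the next data record  */
  if n_rem = 0 then stop;                /* step 3: stop once n items have been drawn  */
  N_rem = N_rem - 1;                     /* step 5                                     */
  u = ranuni(-1);                        /* step 6                                     */
  if u <= n_rem / N_rem then do;         /* step 7: 'otherwise' branch selects record  */
    n_rem = n_rem - 1;
    output;                              /* record i goes into the sample              */
  end;
  drop N_rem n_rem u;
run;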

© J.D. Opdyke 21 A4.8 (cited in Tillé, 2006, proof provided in Opdyke, 2011):
1. Initialize: Let i ← 0, N ← N + 1, n ← n
2. i ← i + 1
3. If n = 0, STOP Algorithm
4. Visit Data Record i
5. N ← N – 1
6. Generate Binomial Random Variate b ~ Binomial(n, p = 1/N)*
7. If b = 0, Go To 2. Otherwise,
   Output Record i into Sample b times
   n ← n – b
   Go To 2.
* Using A4.8, p will never equal zero. If p = 1 (meaning the end of the stratum (dataset) is reached, with i = N, N = 1, and n > 0) before all n items are sampled, the rand function b = rand('binomial', p, n) in SAS® assigns the value n to b, which is correct for A4.8.
Execute the algorithm above m times simultaneously, on each record, using an m-dimensional array.
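The analogous single-sample sketch for A4.8, under the same assumed names as the Bebbington sketch (bigdata, &N, &n); only the selection draw changes, from a uniform comparison to a binomial count.

data sample_wr;
  retain N_rem %eval(&N + 1) n_rem &n;
  set bigdata;
  if n_rem = 0 then stop;
  N_rem = N_rem - 1;
  b = rand('binomial', 1 / N_rem, n_rem);   /* # of times this record enters the sample */
  if b > 0 then do;
    do j = 1 to b;
      output;                               /* output the record b times                */
    end;
    n_rem = n_rem - b;
  end;
  drop N_rem n_rem b j;
run;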

© J.D. Opdyke 22 5. Competing Approaches within SAS®
The only difference between Bebbington (1975) and A4.8 is the density that determines whether, and how many times, an observation is selected: the former, for sampling without replacement, selects at most once using a uniform pseudo-random number generator; the latter, for sampling with replacement, selects zero or more times using a binomial pseudo-random number generator. Bebbington is slightly more competitive because the uniform pseudo-random number generator is faster than the binomial one, whose slowness prevents A4.8 from being a viable competitor.

© J.D. Opdyke 23 5. Competing Approaches within SAS®
Hashing is fast, because it is memory-based, but it runs into memory constraints, crashing on datasets/strata well under a tenth the size of those OPDY can handle. DA is an aging, very slow, essentially obsolete method: the POINT= direct-access option on the SET statement can be used on small datasets, but it is not viable for modern, computationally intensive methods requiring large amounts of resampling. Out-SM is inadequate too, but for different reasons: it becomes prohibitively slow because of all the I/O required. Outputting large numbers of bootstrap-sample observations, sorting them, and then merging them back onto the original data by observation id# is slow, unwieldy, and resource-intensive. So DA, and especially Out-SM, are essentially useless under Big Data conditions; Hashing is fast, but cannot scale to Big Data either.

© J.D. Opdyke 24 5. Competing Approaches within SAS®
Proc SurveySelect is surprisingly slow, given that it is a relatively new procedure. Proc NPAR1WAY is faster, due to more efficient use of memory, but the price it pays is the same tradeoff as Hashing: it crashes on datasets/strata well under a tenth the size of those OPDY can handle. And Proc MultTest, the oldest of the three, is also the slowest of the three, because it is much more I/O-intensive.

© J.D. Opdyke 25 5. Competing Approaches within SAS®
Note that while OPDY and OPDN execute on datasets/strata much larger than Hashing and Proc NPAR1WAY can handle, they actually face no theoretical limit on dataset size: they are limited only by the size of the largest stratum in the dataset. So if a truly massive dataset comprised a large number of strata, each with a fairly large but not massive number of observations, the other methods would fail, but OPDY and OPDN would not.

© J.D. Opdyke 26 6. Conclusions
SCALABLE: No other algorithms or Procs in SAS® are scalable for executing bootstraps and permutation tests on Big Data the way OPDY and OPDN are.
FASTER: Both are ORDERS OF MAGNITUDE FASTER than all other algorithms/Procs once datasets are at least of modest size, which is the only time speed matters anyway.
MORE ROBUST: And both can handle strata orders of magnitude larger than all other methods can before those methods either crash or become prohibitively slow, since they lack the linear time complexity of OPDY and OPDN. Theoretically, dataset size is unlimited for these algorithms.

© J.D. Opdyke 27 6. Conclusions
GENERALIZABILITY: Both OPDY and OPDN are also completely generalizable: OPDY can execute bootstraps on multivariate models (DataMineIt has a version of OPDY_Boot_FT1 that does this), and OPDN can be modified to execute permutation tests using virtually any test statistic. No other algorithms/Procs in SAS® can handle the challenge of applying computationally intensive resampling methods to BIG DATA as OPDY and OPDN (and their proprietary versions, OPDY_Boot_FT1 and OPDN_Perm_FT1) can.

© J.D. Opdyke References
Bebbington, A. (1975), "A Simple Method of Drawing a Sample Without Replacement," Journal of the Royal Statistical Society, Series C (Applied Statistics), Vol. 24, No. 1, p. 136.
Dorfman, P., "The DOW-Loop Unrolled," Paper BB-13.
Goodman, S., & S. Hedetniemi (1977), Introduction to the Design and Analysis of Algorithms, McGraw-Hill, New York.
Opdyke, J.D. (2010), "Much Faster Bootstraps Using SAS®," InterStat, October.
Opdyke, J.D. (2011), "Permutation Tests (and Sampling Without Replacement) Orders of Magnitude Faster Using SAS®," InterStat, January.
Pesarin, F. (2001), Multivariate Permutation Tests with Applications in Biostatistics, John Wiley & Sons, Ltd., New York.
Tillé, Y. (2006), Sampling Algorithms, Springer, New York.

© J.D. Opdyke 30 Acknowledgments: I sincerely thank Nicole Ann Johnson Opdyke and Toyo Johnson for their support and belief that SAS® could produce a better bootstrap and a better permutation test.

© J.D. Opdyke 31 Providing statistical consulting and risk analytics to the banking, credit, and consulting sectors. J.D. Opdyke, President, DataMineIt