Consistent Assessment of Biomarker and Subgroup Identification Methods
H.D. Hollins Showalter
5/20/2014 (MBSW)

Outline
1. Background
2. Data Generation
3. Performance Measurement
4. Example
5. Operationalization
6. Conclusion


Tailored Therapeutics
A medication for which treatment decisions are based on the molecular profile of the patient, the disease, and/or the patient's response to treatment. A tailored therapeutic allows the sponsor to make a regulatory-approved claim of an expected treatment effect (efficacy or safety).
"Tailored therapeutics can significantly increase value—first, for patients—who achieve better outcomes with less risk and, second, for payers—who more frequently get the results they expect."*
*Opening Remarks at 2009 Investor Meeting, John C. Lechleiter, Ph.D. Adapted from slides presented by William L. Macias, MD, PhD, Eli Lilly.

Achieving Tailored Therapeutics
Data source: clinical trials (mostly)
Objective: identify biomarkers and subgroups
Challenges: complexity, multiplicity
Need: modern statistical methods

Prognostic vs. Predictive Markers
Prognostic marker: a single trait or signature of traits that identifies different groups of patients with respect to the risk of an outcome of interest in the absence of treatment.
Predictive marker: a single trait or signature of traits that identifies different groups of patients with respect to the outcome of interest in response to a particular treatment.

Statistical Interactions
[Figure: response vs. marker status (−, +) under treatment and no treatment, illustrating a marker effect, a treatment effect, and a treatment-by-marker effect.]
Y = β0 + β1·M + β2·T + β3·M·T + ε
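The interaction model above can be checked with a small simulation. The sketch below (effect sizes b0–b3 are illustrative, not values from the talk) recovers the treatment-by-marker interaction β3 from the four cell means:

```python
import random

random.seed(1)

# Y = b0 + b1*M + b2*T + b3*M*T + noise, with binary marker M and treatment T.
b0, b1, b2, b3 = 1.0, 0.5, 0.2, 2.0   # illustrative effect sizes
data = []
for _ in range(20000):
    m, t = random.randint(0, 1), random.randint(0, 1)
    data.append((m, t, b0 + b1*m + b2*t + b3*m*t + random.gauss(0, 1)))

def cell_mean(m, t):
    ys = [y for mm, tt, y in data if (mm, tt) == (m, t)]
    return sum(ys) / len(ys)

# The interaction is the difference of treatment effects across marker groups:
# (effect in M+) minus (effect in M-); it estimates b3.
b3_hat = (cell_mean(1, 1) - cell_mean(1, 0)) - (cell_mean(0, 1) - cell_mean(0, 0))
```

When b3 = 0 the treatment effect is the same in both marker groups, i.e., the marker is not predictive.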

Types of Predictive Markers
[Figure: four response-vs.-marker panels showing different patterns of predictive markers under treatment vs. no treatment.]

Predictive Marker Example
[Figure: entire population vs. subgroups of interest defined by x1 and x2, with group sizes (50%, 25%, 75%) and treatment/placebo responses and treatment effects per subgroup; most effect values are not recoverable from this transcript.]

BSID vs. "Traditional" Analysis
Traditional subgroup analysis:
o Interaction testing, one at a time
o Many statistical issues
o Many gaps for tailoring
Biomarker and subgroup identification (BSID):
o Utilizes modern statistical methods
o Addresses issues with subgroup analysis
o Maximizes tailoring opportunities

Simulation to Assess BSID Methods
Objective: consistent, rigorous, and comprehensive calibration and comparison of BSID methods.
Value: further improve methodology
o Identify the gaps (where existing methods perform poorly)
o Synergy/combining ideas from multiple methods
Optimize application for specific clinical trials.

BSID Simulation: Three Components
1. Data generation
o Key is consistency
2. BSID
o "Open" and comprehensive application of analysis method(s)
3. Performance measurement
o Key is consistency

BSID Simulation: Visual Representation
[Diagram: Datasets 1..n flow through BSID to produce Results 1..n and Performance Metrics 1..n; combined with the underlying truth, these roll up into overall performance metrics.]
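The three-component loop can be sketched in a few lines. Here the "BSID method" is a toy interaction test standing in for a real method; all names, effect sizes, and thresholds are illustrative:

```python
import random
import statistics

def generate_dataset(seed, n=400):
    """Generate one virtual trial; the embedded 'truth' is recorded alongside it."""
    rng = random.Random(seed)            # per-dataset seed: every dataset is reproducible
    rows = []
    for _ in range(n):
        x1 = rng.randint(0, 1)
        t = rng.randint(0, 1)
        y = 1.0 * x1 * t + rng.gauss(0, 1)   # x1 is the (known) predictive marker
        rows.append((x1, t, y))
    return rows, {"predictive_marker": "x1"}

def bsid_method(rows):
    """Toy BSID stand-in: flag x1 if the observed interaction exceeds a cutoff."""
    def mean_y(m, t):
        ys = [y for x1, tt, y in rows if x1 == m and tt == t]
        return sum(ys) / len(ys) if ys else 0.0
    interaction = (mean_y(1, 1) - mean_y(1, 0)) - (mean_y(0, 1) - mean_y(0, 0))
    return "x1" if interaction > 0.5 else None

def score(result, truth):
    """Performance metric: did the method recover the true predictive marker?"""
    return 1.0 if result == truth["predictive_marker"] else 0.0

metrics = []
for i in range(50):                      # Dataset 1 .. Dataset n
    rows, truth = generate_dataset(seed=i)
    metrics.append(score(bsid_method(rows), truth))
selection_rate = statistics.mean(metrics)   # overall performance metric
```

Consistency comes from the fixed interfaces: any method that maps a dataset to a result in the standard format can be dropped into the same loop and scored against the same truth.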


Data Generation
Creating virtual trial data:
o Make assumptions in order to emulate real trial data
o Use knowledge of the disease and therapies, including historical data
o Specific to BSID: must embed markers and subgroups
In order to measure the performance of BSID methodology, the "truth" is needed.
o This is challenging/impossible to discern using real trial data

Data Generation Survey
[Table: comparison of data-generation attributes across SIDES (2011)¹, SIDES (2014)², VT³, GUIDE⁴, QUINT⁵, and IT⁶ — sample size, number of predictors, response type, predictor type and correlation, treatment assignment, number and size of predictive effects, predictive M+ group size, prognostic markers and effects, and the generating model (logit, linear, tree, or exponential). Many cell values are garbled in this transcript; per-method details appear in the backup slides.]

Data Generation: Recommendations
Clearly identify attributes and models
o Transparency
o Traceability of analysis
Capture the "truth" in a way that facilitates performance measurement
Derive efficiency and synergistic value (more on this later!)

Data Generation: Specifics
Identify key attributes:
o Sample size
o Number of predictors
o Response type
o Predictor type/correlation
o Subgroup size
o Sizes of effects: placebo response, overall treatment effect, predictive effect(s), prognostic effect(s)
o Others: missing data, treatment assignment
Specify the model
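These attributes lend themselves to a declarative specification that a generator can consume. The field names below are illustrative, not a standard schema; the values echo a simple hypothetical scenario:

```python
# Hypothetical attribute specification for one data-generation scenario,
# mirroring the checklist above (field names are illustrative, not a standard).
scenario = {
    "simulations": 200,                  # number of datasets
    "n": 240,                            # sample size per dataset
    "p": 20,                             # number of predictors
    "response_type": "continuous",
    "predictor_type": "ordinal",
    "predictor_correlation": 0.0,
    "treatment_assignment": (1, 3),      # pl:trt randomization ratio
    "placebo_response": -0.1,
    "overall_treatment_effect": -0.1,
    "n_predictive_markers": 1,
    "predictive_effect": -0.45,
    "predictive_group_fraction": 0.5,    # M+ group size as a fraction of n
    "n_prognostic_markers": 0,
    "model": "linear",
}
```

Keeping the scenario in one declarative object gives the transparency and traceability recommended above: the specification itself can be published alongside the results.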


Data Generation: Requirements
Format data consistently
Make code flexible enough to accommodate any/all attributes and models
Ensure that individual datasets can be reproduced (e.g., distinct seeds for random number generation)
The resulting dataset(s) should always have the same look and feel
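Reproducibility is easiest with a dedicated random generator per dataset, keyed by an explicit seed; a minimal sketch:

```python
import random

def generate(seed, n=5):
    """Regenerate a dataset bit-for-bit from its seed."""
    rng = random.Random(seed)   # dedicated generator, independent of global state
    return [rng.gauss(0, 1) for _ in range(n)]

# The same seed always yields the identical dataset, so any individual
# simulated trial can be reproduced on demand; different seeds differ.
a = generate(seed=42)
b = generate(seed=42)
c = generate(seed=43)
```

Storing the seed with each dataset's results makes every row of the overall performance summary traceable back to the exact data that produced it.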


Performance Measurement
Quantifying the ability of BSID methodology to recapture the "truth" underlying the (generated) data.
If done consistently, this allows calibration and comparison of BSID methods.

Performance Measurement: Survey
[Table: performance metrics reported in each method's simulation study.]
o SIDES (2011)¹: selection rate; complete match rate; partial match rate; confirmation rate; treatment effect fraction
o SIDES (2014)²: Pr(complete match); Pr(partial match); Pr(selecting a subset); Pr(selecting a superset); treatment effect fraction (updated definition)
o VT³: finding correct X's; power
o GUIDE⁴: Pr(selection at 1st- or 2nd-level splits of trees); accuracy; Pr(nontrivial tree)
o QUINT⁵: (RP1a) Pr(type I errors); (RP1b) Pr(type II errors); (RP2) recovery of tree complexity; (RP3) recovery of splitting variables and split points; (RP4) recovery of assignments of observations to partition classes
o IT⁶: frequencies of the final tree sizes; frequency of (predictor) "hits"; bias assessment via likelihood ratio and logrank tests

Performance Measurement: Recommendations
[Diagram: three levels of performance measurement — marker level (testing), subgroup level (estimation), and subject level (prediction).]

Performance Measurement: Survey Revisited
[The survey table from the previous slide, annotated by level: each reported metric is classified as marker level (testing), subgroup level (estimation), or subject level (prediction).]

Contingency Table: Marker Level
[2×2 table: rows = identified as predictive (yes/no), columns = truly predictive (true/false), giving true/false positives and true/false negatives.]
Sensitivity = true positives / truly predictive biomarkers
Specificity = true negatives / truly non-predictive biomarkers
PPV = true positives / biomarkers identified as predictive
NPV = true negatives / biomarkers not identified as predictive
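The four marker-level rates fall directly out of the 2×2 table. A sketch operating on sets of marker names (all names hypothetical):

```python
def marker_level_metrics(true_predictive, identified, all_markers):
    """Sensitivity, specificity, PPV, NPV from sets of marker names."""
    tp = len(true_predictive & identified)            # truly predictive and identified
    fp = len(identified - true_predictive)            # identified but not predictive
    fn = len(true_predictive - identified)            # predictive but missed
    tn = len(all_markers - true_predictive - identified)
    safe = lambda num, den: num / den if den else float("nan")
    return {
        "sensitivity": safe(tp, tp + fn),
        "specificity": safe(tn, tn + fp),
        "ppv": safe(tp, tp + fp),
        "npv": safe(tn, tn + fn),
    }

# Example: 20 candidate markers, x1 and x2 truly predictive, method flags x1 and x5.
markers = {f"x{i}" for i in range(1, 21)}
m = marker_level_metrics({"x1", "x2"}, {"x1", "x5"}, markers)
```

With this example the method finds one of two true markers (sensitivity 0.5) and half of its claims are correct (PPV 0.5).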

Performance Measures: Marker Level
Number and % of predictors: true vs. identified
Sensitivity, Specificity, PPV, NPV

Performance Measures: Subgroup Level
Size of the identified subgroup
Treatment effect in the identified subgroup
o Average the true "individual" treatment effects under the potential-outcomes framework
Accuracy of the estimated treatment effect
o Difference (both absolute and directional) between the estimate and the true effect

Performance Measures: Subgroup Level, cont.
Implications for the sample size/time/cost of future trials:
o Given the true treatment effect, how many subjects are needed in the trial for 90% power?
o What is the cost of the trial? (mainly driven by the number enrolled)
o How much time will the trial take? (mainly driven by the number screened)
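For a continuous endpoint, the standard two-arm normal-approximation formula shows how the identified subgroup's true effect drives the size of the next trial (the delta and sigma values below are illustrative, not from the talk):

```python
import math
from statistics import NormalDist

def n_per_arm(delta, sigma, alpha=0.05, power=0.90):
    """Subjects per arm to detect effect delta with sd sigma:
    n = 2 * (z_{1-alpha/2} + z_{1-beta})^2 * (sigma/delta)^2."""
    z = NormalDist().inv_cdf
    return math.ceil(2 * (z(1 - alpha / 2) + z(power)) ** 2 * (sigma / delta) ** 2)

# A subgroup with double the treatment effect needs roughly a quarter of the
# subjects per arm, which is why subgroup identification changes cost and time.
n_full = n_per_arm(delta=0.25, sigma=1.0)   # effect diluted over the full population
n_sub = n_per_arm(delta=0.50, sigma=1.0)    # effect concentrated in the subgroup
```

Screening time scales differently: enrolling only the subgroup means screening more patients per enrollee, so the time question on this slide is not simply the cost question divided by accrual rate.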

Contingency Table: Subject Level
[2×2 table: rows = membership classification (M+/M−), columns = true potential to realize an enhanced treatment effect* (true/false).]
Sensitivity = true positives / subjects with a true enhanced treatment effect
Specificity = true negatives / subjects without an enhanced treatment effect
PPV = true positives / subjects classified as M+
NPV = true negatives / subjects classified as M−
*at a meaningful or desired level

Performance Measures: Subject Level
Compare subgroup membership at the individual level: true vs. identified
Sensitivity, Specificity, PPV, NPV

Conditional Performance Measures
The same metrics with Null submissions removed.
Markers/subgroups can be very difficult to find. When a method DOES find something, how accurate is it?
It is hard(er) to compare multiple methods when all performance measures are washed out by Null submissions.

Conditional Subgroup-Level Measures: Example
Truth (1000 simulations; x1 very hard to find): M+ (x1 = 1) is 50% of the population with treatment effect 10; M− (x1 = 0) has treatment effect 0. A second marker x2 also splits the population 50/50 but is not predictive.
BSID Method A: 900/1000 Null; 100/1000 identify x1 = 1.
o Unconditional: size 0.95, effect 5.5. Conditional: size 0.5, effect 10.
BSID Method B: 900/1000 Null; 50/1000 identify x1 = 1; 50/1000 identify x2 = 1.
o Unconditional: size 0.95, effect 5.25. Conditional: size 0.5, effect 7.5.
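The worked numbers above can be reproduced by treating each simulation's submission as a (size, true effect) pair and averaging with and without the Null submissions:

```python
from statistics import mean

# Each submission is (subgroup size as a fraction of n, true effect in it).
# Null = proposing the whole population: size 1.0, overall effect 5
# (the average of 10 in the 50% M+ group and 0 in M-).
NULL = (1.0, 5.0)
method_a = [NULL] * 900 + [(0.5, 10.0)] * 100   # 100 runs find the true x1 subgroup
method_b = [NULL] * 900 + [(0.5, 10.0)] * 50 + [(0.5, 5.0)] * 50
# Method B's x2 subgroup is half M+ and half M-, so its true effect is only 5.

def summarize(subs):
    """(mean size, mean effect) unconditionally and conditional on non-Null."""
    uncond = (mean(s for s, _ in subs), mean(e for _, e in subs))
    found = [sub for sub in subs if sub != NULL]
    cond = (mean(s for s, _ in found), mean(e for _, e in found))
    return uncond, cond

a_uncond, a_cond = summarize(method_a)   # per the slide: (0.95, 5.5) and (0.5, 10)
b_uncond, b_cond = summarize(method_b)   # per the slide: (0.95, 5.25) and (0.5, 7.5)
```

The conditional view separates the methods: both look nearly identical unconditionally, but conditioning on non-Null submissions reveals that Method B's findings are diluted by the non-predictive x2.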

Performance Measurement: Requirements
For each application of BSID, the user proposes:
o A list of predictive biomarkers
o The one subgroup for designing the next study
o The estimated treatment effect in this subgroup
In conjunction with the "truth" underlying the generated data, all of the recommended performance measures can be calculated from these elements.

Considering the "Three Levels"
What are the most important and relevant measures of a result? It depends on the objective:
o Marker level: invest further in the marker(s)
o Subgroup level: tailor the next study/design
o Subject level: impact in clinical practice


Data Generation Example
o Simulations (datasets): 200
o n: 240
o p: 20
o Response type: ?
o Predictor type: ordinal ("genetic")
o Predictor correlation: 0
o Treatment assignment: 1:3 (pl:trt)
o Placebo response: -0.1 (in the weakest-responding subgroup)
o Treatment effect: -0.1 (in the weakest-responding subgroup)
o # predictive markers: 1
o Predictive effect size (type): -0.45 (dominant)
o Predictive M+ group size: ~50% of n
o # prognostic markers: 0
o Prognostic effect size(s): N/A
o Model: linear

Data Generation Example, cont.
[Figure not captured in this transcript.]

Data Generation Example, concl.
[Figure: two example datasets (Dataset 1 and Dataset 21) showing arm-level responses (Trt 0, Trt 1) and the effect within the x_1_1_1 subgroup; values not recoverable from this transcript.]

BSID Methods Applied to Example
Alpha controlled at 0.1. Three approaches: Traditional, Virtual Twins³, TSDT⁷.
o Handling treatment-by-subgroup interaction: model (Traditional); transformation (Virtual Twins); sequential (TSDT)
o Searching for candidate subgroups: exhaustive; recursive partitioning
o Addressing multiplicity: simple Sidak correction (Traditional); permutation (Virtual Twins); sub-sampling + permutation (TSDT)

Performance Measurement Example
Truth + Proposal = Performance Measures

Performance Measurement Example, cont.
[Figure not captured in this transcript.]

Performance Measurement Example, concl.
[Table: unconditional and conditional metrics for Traditional, Virtual Twins³, and TSDT⁷ at the marker, subgroup, and subject levels; most cell values are not recoverable from this transcript. Legible rows:]
o Non-identification (Null): 89% (Traditional), 78% (Virtual Twins), 58% (TSDT)
o Subgroup size: 93.6% uncond. / 41.4% cond. (Traditional); 88.8% / 48.9% (Virtual Twins); 79.2% / 50.4% (TSDT)


Strategy
Develop framework (done/ongoing)
Present/get input (current)
o Internal and external forums
o Workshop
Establish an open environment (future)
o R package on CRAN
o Web portal repository

Predictive Biomarker Project: Vision
Access the Web Portal
o Read the open description (objective, models, formats, etc.)
Access the web interface for Data Generation
o Generate data under specified scenarios, or utilize "standard"/pre-existing scenarios
Apply BSID methodology to the datasets
o Express results in the specified format
Access the web interface for Performance Measurement
o Compare performance
Contribute to the Repository (encouraged)
o Open sharing of results, descriptions, programs

Pros and Cons
Pros:
o More convenient and useful simulation studies to aid research
o Direct comparisons of performance across methods
o Optimization of methods for scenarios relevant and important to drug development
o New insights and collaborations
o Datasets could be applied to other statistical problems
Cons:
o Need to develop infrastructure to support simulated data
o Access and upkeep
o Need experts to explicitly define the scope


Conclusion
Simulation studies are a common approach to assessing BSID methods, but there is a lack of consistency in data generation and performance measurement.
The presented framework enables consistent, rigorous, and comprehensive calibration and comparison of BSID methods.
Collaborating on this effort will yield efficiency and synergistic value.

Acknowledgements
Richard Zink, Lei Shen, Chakib Battioui, Steve Ruberg, Ying Ding, Michael Bell

References
1. Lipkovich I, Dmitrienko A, Denne J, Enas G. Subgroup identification based on differential effect search — a recursive partitioning method for establishing response to treatment in patient subpopulations. Statistics in Medicine 2011; 30:2601–2621.
2. Lipkovich I, Dmitrienko A. Strategies for identifying predictive biomarkers and subgroups with enhanced treatment effect in clinical trials using SIDES. Journal of Biopharmaceutical Statistics 2014; 24.
3. Foster JC, Taylor JMG, Ruberg SJ. Subgroup identification from randomized clinical trial data. Statistics in Medicine 2011; 30:2867–2880.
4. Loh WY, He X, Man M. A regression tree approach to identifying subgroups with differential treatment effects. Presented at the Midwest Biopharmaceutical Statistics Workshop.
5. Dusseldorp E, Van Mechelen I. Qualitative interaction trees: a tool to identify qualitative treatment–subgroup interactions. Statistics in Medicine 2014; 33:219–237.
6. Su X, Zhou T, Yan X, Fan J, Yang S. Interaction trees with censored survival data. International Journal of Biostatistics 2008; 4(1): Article 2.
7. Battioui C, Shen L, Ruberg S. A resampling-based ensemble tree method to identify patient subgroups with enhanced treatment effect. Proceedings of the 2013 Joint Statistical Meetings.
8. Zink R, Shen L, Wolfinger R, Showalter H. Assessment of methods to identify patient subgroups with enhanced treatment response in randomized clinical trials. Presented at the 2013 ICSA Applied Statistical Symposium.
9. Shen L, Ding Y, Battioui C. A framework of statistical methods for identification of subgroups with differential treatment effects in randomized trials. Presented at the 2013 ICSA Applied Statistical Symposium.

Backup Slides

Data Generation: SIDES (2011)¹
o Simulations (datasets): 5000
o n: 900 (then divided into 3 equal parts: 1 training, 2 test)
o p: 5, 10, 20
o Response type: ?
o Predictor type: binary (dichotomized from continuous)
o Predictor correlation: 0, 0.3
o Treatment assignment: 1:1 (pl:trt)
o Placebo response: 0
o Treatment effect: 0
o # predictive markers: 0, 1, 2, 3*
o Predictive effect size(s): not explicitly stated
o Predictive M+ group size: 15%–20% of n (not explicitly stated)
o # prognostic markers: 0
o Prognostic effect size(s): N/A
o Model: "contribution model"

Data Generation: SIDES (2014)², Scenarios 1–4
o Simulations (datasets): 10000
o n: ?
o p: 20, 60, 100
o Response type: ?
o Predictor type: binary (dichotomized from continuous)
o Predictor correlation: 0 or 0.2* (varies by scenario)
o Treatment assignment: 1:1 (pl:trt)
o Placebo response: 0
o Treatment effect: 0
o # predictive markers: 2**
o Predictive effect size(s): ?
o Predictive M+ group size: 0.5 × n (= 450)
o # prognostic markers: 0
o Prognostic effect size(s): N/A
o Model: "contribution model"

Data Generation: Virtual Twins³ (Null, Base, and Modifications* scenarios)
o Simulations (datasets): 100
o n: ? and 2000
o p: 15, 30
o Response type: binary
o Predictor type: ?
o Predictor correlation: 0 (Null, Base); 0.7** (Modifications)
o Treatment assignment: ?
o Treatment effect: 0.1
o # predictive markers: 0 (Null); 2 (Base, Modifications)
o Predictive effect size(s): 0 (Null); 0.9 for X1*X2 (Base); 1.5 for X1*X2 (Modifications)
o Predictive M+ group size: N/A (Null); ~0.25 × n (~250); ~0.5 × n (~500)
o # prognostic markers: 3
o Prognostic effect size(s): 0.5, 0.5, -0.5 for X1, X2, X7; 0.5 for X2*X7
o Model: logit model; logit model with subject-specific effects a_i and (a_i, b_i)

Data Generation: GUIDE⁴ (Models M1, M2, M3)
o Simulations (datasets): 1000
o n: 100
o p: ?
o Response type: binary
o Predictor type: categorical (3 levels)*
o Predictor correlation: 0
o Treatment assignment: ~1:1 (pl:trt)
o Placebo response: ?
o Treatment effect: 0 (M1, M2); 0.2 (M3)
o # predictive markers: 2 (M1, M2); 0 (M3)
o Predictive effect size(s): 0.2, 0.15 for X1 and X1*X2 terms (M1; partially garbled); 0.4 for X1*X2 (M2); N/A (M3)
o Predictive M+ group size: ~0.36 × n (~360) in the strongest M+ group (not explicitly stated); ~0.36 × n (~360) (not explicitly stated); N/A
o # prognostic markers: 0 (M1); 4 (M2); 2 (M3)
o Prognostic effect size(s): N/A (M1); 0.2 for X3 and X1*X2 terms (M2; partially garbled); 0.2 for X1, X2 (M3)
o Model: linear model (on the probability scale)

Data Generation: QUINT⁵ (Models A–E)
o Simulations (datasets): 100
o n: 200, 300, 400, 500, 1000
o p: 5, 10, 20
o Response type: continuous*
o Predictor type: continuous (multivariate normal)**
o Predictor correlation: 0, 0.2
o Treatment assignment: ~1:1 (trt 1:trt 2)
o Treatment 1 response: 20***; 18.33***; 30*** (varies by model)
o Treatment 2 effect: -2.5, -5, -10***; 0*** (varies by model)
o # predictive markers: 1 (A); 2 (B); 3 (C); 3 (D); 1 (E)
o Predictive effect size(s): 5, 10, 20***; 2.5, 5, 10*** (varies by model)
o Predictive M+ group size: ~0.16 × n; ~0.38 × n; ~0.16 × n; ~0.5 × n (none explicitly stated)***
o # prognostic markers: 1***; 2***; 3***; 1*** (varies by model)
o Prognostic effect size(s): 20***; 21.67***; 10*** (varies by model)
o Model: "tree model"

Data Generation: Interaction Trees⁶ (Models A–D)
o Simulations (datasets): 100
o n: 450 with the test-sample method (300 learning, 150 validation); 300 with the bootstrap method
o p: 4
o Response type: time-to-event (censoring rates = 0%, 50%)
o Predictor type: ordinal for X1 and X3, categorical for X2 and X4
o Predictor correlation: 0
o Treatment assignment: ?
o Placebo response: 0.135
o Treatment effect: 2* (all models)
o # predictive markers: 0 (A); 2 (B, C, D)
o Predictive effect size(s): N/A (A); 0.223 for X1* and X2* terms (B; partially garbled); ranges for X1* and X2*** (C; values garbled); 0.5 for X1*, 2 for X2* (D)
o Predictive M+ group size: N/A (A); ~0.25 × n in the strongest M+ group (B, D; not explicitly stated); not explicitly stated** (C)
o # prognostic markers: 2 (A); 0 (B, C, D)
o Prognostic effect size(s): 0.223 for X1* and X2* terms (A; partially garbled); N/A otherwise
o Model: exponential model

Performance Measurement: SIDES (2011)¹
Selection rate: the proportion of simulation runs in which ≥1 subgroup was identified.
o Complete match rate: proportion of simulation runs in which the ideal subgroup was selected as the top subgroup (computed over the runs in which at least one subgroup was selected).
o Partial match rate: proportion of simulation runs in which the top subgroup was a subset of the ideal subgroup (computed over the runs in which at least one subgroup was selected).
Confirmation rate: the proportion of simulation runs that yielded a confirmed subgroup (not necessarily identical to the ideal subgroup). In each run, the top subgroup was identified in terms of the treatment-effect p-value in the training dataset (if at least one subgroup was selected). The subgroup was classified as "confirmed" if the treatment effect in this subgroup was significant at a two-sided 0.05 level in both test datasets.
Treatment effect fraction: the fraction of the treatment effect (per patient) in the ideal group that was retained in the top selected or confirmed subgroup. [Defining formula not captured in this transcript.]

Performance Measurement: SIDES (2014)²
Probability of a complete match
Probability of a partial match
o Probability of selecting a subset
o Probability of selecting a superset
Treatment effect fraction (updated definition, not weighted by group sizes). [Defining formula not captured in this transcript.]

Performance Measurement: Virtual Twins³
[Slide content not captured in this transcript.]

Performance Measurement: GUIDE⁴
[Slide content not captured in this transcript.]

Performance Measurement: QUINT⁵
(RP1a) Probability of type I errors
(RP1b) Probability of type II errors
(RP2) Recovery of tree complexity: given an underlying true tree with a qualitative treatment–subgroup interaction that has been correctly detected, the probability of successfully identifying the complexity of the true tree.
(RP3) Recovery of splitting variables and split points: given an underlying true tree with a qualitative treatment–subgroup interaction that has been correctly detected, the probability of recovering the true tree in terms of the true splitting variables and the true split points.
(RP4) Recovery of the assignments of the observations to the partition classes

Performance Measurement: Interaction Trees⁶
Frequencies of the final tree sizes
Frequency of (predictor) "hits"
Bias assessment: the following were calculated for the pooled training and test samples and for the validation samples:
o the likelihood ratio test (LRT) for overall interaction
o the logrank test for the treatment effect within the terminal node showing maximal treatment efficacy (for presentation convenience, the logworth of the p-value, defined as -log10(p-value), was used)

Predictive Biomarker Project
o Data Generation: web interface; standard datasets
o BSID: open methods; standard output
o Performance Measurement: web interface; standard summary
