
1
Pinpointing the security boundary in high-dimensional spaces using importance sampling
Simon Tindemans, Ioannis Konstantelos, Goran Strbac
Risk and Reliability Modelling of Energy Systems
12th November 2014, Durham University

2
Off-line support for operational security
Day-ahead and real-time operation: operating conditions, forecasts, contingencies; security analysis; actions; severe computational constraints.
1. Anticipate  2. Analyse  3. Classify
Offline analysis (e.g. week ahead): Monte Carlo sampling of operating conditions and contingencies; dynamic simulation; impact analysis; machine learning; data-driven heuristics.

3
Decision trees for security studies
Two-dimensional example: regions labelled insecure / secure* / secure in the (parameter 1, parameter 2) plane; high-dimensional DT with training errors. DT image courtesy of Pepite.

4
The security boundary
Quality of predictive classification (secure / insecure) in the (parameter 1, parameter 2) plane. Pinpointing the security boundary: which one? Scenarios 'near' the security boundary improve prediction quality.
Importance sampling:
1. Which states to sample?
2. How to sample those states?

5
Importance sampling for classification
Previous applications have relied on three assumptions:
1. A meaningful definition of 'distance' from the security boundary.
2. 'Easy' sampling distributions.
3. 'Nice' properties of the security boundary.
We propose a data-driven iterative importance sampling method that does not rely on these assumptions.
Krishnan et al. (2011), IEEE Transactions on Power Systems
Lund et al. (2014), IEEE Transactions on Power Systems

6
What to sample?
Defining 'interestingness'. Two-dimensional example and high-dimensional DT with training errors, with regions labelled insecure / secure* / secure in the (parameter 1, parameter 2) plane. DT image courtesy of Pepite.

7
How to sample?
- Repurpose the machine learning process to guide sampling: decision trees express security and interestingness in terms of pre-fault variables.
- Abort evaluation of uninteresting points. This is possible because of the separation of time scales.
Pipeline: sample random 'external' conditions (~10ms*) → complete starting point (~1 min) → importance sampling filter (decision trees reject many uninteresting points) → contingencies, dynamic simulation, impact analysis (N x ~1 min) → machine learning.
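A minimal sketch of the accept-reject filter described above. The trained 'interestingness' decision tree is replaced here by a hypothetical threshold rule (`interestingness`), and the acceptance probability for uninteresting states (`p_accept_dull`) is an assumed illustrative value, not a figure from the study.

```python
import random

def interestingness(state):
    """Stand-in for the trained 'interestingness' decision tree: a
    hypothetical rule flagging states near an assumed security boundary."""
    p1, p2 = state
    return abs(p1 + p2 - 1.0) < 0.2

def importance_filter(states, p_accept_dull=0.25, rng=None):
    """Accept-reject filter: interesting states are always kept (weight 1);
    uninteresting states are kept with probability p_accept_dull and carry
    weight 1/p_accept_dull, so weighted statistics remain unbiased."""
    rng = rng or random.Random(0)
    kept = []
    for s in states:
        if interestingness(s):
            kept.append((s, 1.0))
        elif rng.random() < p_accept_dull:
            kept.append((s, 1.0 / p_accept_dull))
    return kept

# Usage: filter a pool of uniformly sampled operating points.
pool_rng = random.Random(42)
pool = [(pool_rng.random(), pool_rng.random()) for _ in range(1000)]
kept = importance_filter(pool)
print(f"kept {len(kept)} of {len(pool)} sampled states")
```

Only the cheap tree evaluation is spent on rejected points; the expensive contingency simulations run only for the kept, weighted states.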

8
Considerations and challenges
- How to control biasing? A biasing parameter b controls the relative populations; b = 0.5 is a defensive choice (at most a 2x slowdown). For very high rejection rates, the initial stages are no longer negligible.
- Weights: weights should be used at every subsequent analysis step.
- Two-stage filtering: further gains can be made by exploiting the gap in effort between sampling of 'TSO-external' variables (~ms) and completion of the base state (~1 min).
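The point that weights must enter every subsequent analysis step can be illustrated with a weighted misclassification estimate. The sample values below are invented for illustration; the weight 4.0 corresponds to an assumed acceptance probability of 0.25 for down-sampled uninteresting states.

```python
def weighted_error(samples):
    """Misclassification error from weighted samples: each retained sample
    carries weight 1/p_accept, and the weights must be applied at every
    subsequent analysis step to keep the estimate unbiased."""
    num = sum(w for w, correct in samples if not correct)
    den = sum(w for w, correct in samples)
    return num / den

# Hypothetical retained samples as (weight, classified_correctly) pairs:
# weight 1.0 for 'interesting' states, 4.0 for down-sampled dull states.
samples = [(1.0, True), (1.0, False), (4.0, True), (4.0, True)]
print(weighted_error(samples))  # 1.0 / 10.0 = 0.1
```

An unweighted average over the same samples would give 0.25, overstating the error because the rare dull states are under-represented in the retained set.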

9
Case study
Dynamic simulation study based on the French EHV grid: ~1500 nodes, ~2000 lines; 30,334 classifying variables; 1970 contingencies; 6 security indices (only overloads used).
Computation on the PRACE Curie HPC: 10,000 cores, 24 hours [~2.5 tCO2]; 2 GB results file; 10 GB of decision trees.
Unbiased sample of 10,044 valid initial conditions.
PRACE Curie: http://www-hpc.cea.fr/en/complexe/tgcc-curie.htm

10
Case study [contd]
'Offline' simulation of importance sampling:
- Use 6,000 states x 1,970 contingencies as an unbiased sample 'pool'.
- Process in batches (500, 1000); generate trees after each batch.
- Importance sampling acceptance rate: average 24% (1431 of 6000); minimum 16% (967) [least interesting contingency]; maximum 100% (6000) [most interesting contingency].
Validation using 4,044 states x 1,970 contingencies to estimate errors:
- Importance sampling classifiers
- Unbiased classifiers, using an identical computational budget (1431 states/contingency)

11
Results: error analysis
Per-contingency scatter of misclassification error vs. points analysed, with and without IS. Mean change in error: -0.0012; mean number of points analysed: 1431 (the computational budget for a naïve implementation). IS gives increased attention to badly classified contingencies and a decreased average error.

12
Results: error analysis [contd]
Restricting to |dError| > 0.01: 101 of 1970 contingencies; mean change in error: -0.016. Focused analysis results in a reduction of errors. Most change is for the better, though some trees are worse off.

13
Summary and outlook
Summary:
- Offline analysis and machine learning can support power system operation.
- The challenge is to pinpoint the security boundary with finite resources.
- Proposed a data-driven importance sampling method that uses 'interestingness trees' and accept-reject sampling.
- Initial trials suggest an increase in accuracy for a given computational budget.
Outlook:
- Quantification of speedup.
- Two-stage importance sampling (an extra early rejection step).
- Implementation on an HPC platform.

14
Thank you
This research was supported by the iTesla project within the 7th European Community Framework Programme.
Partners for this work

16
Two-stage importance sampling
Pipeline: sample random 'external' conditions (~10ms*) → stage-I importance sampling (reject many uninteresting points) → complete starting point (~1 min) → stage-II importance sampling (reject many uninteresting points) → contingencies, dynamic simulation and impact analysis (N x ~1 min) → machine learning (decision trees, reduced classification).
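A sketch of the two-stage scheme, under assumptions: both stage trees are replaced by hypothetical threshold rules, and the stage-wise acceptance probabilities for uninteresting candidates (`p1`, `p2`) are illustrative values. The key point is that a stage-I rejection avoids the ~1 min base-state completion entirely.

```python
import random

def stage1_interesting(external):
    """Stage-I tree stand-in: cheap test on 'TSO-external' variables only."""
    return external > 0.3

def stage2_interesting(state):
    """Stage-II tree stand-in: test on the completed starting point."""
    return abs(state - 0.5) < 0.2

def two_stage_sample(n, p1=0.25, p2=0.25, rng=None):
    """Each stage keeps uninteresting candidates with a small probability
    and divides the weight by that probability; weighted statistics over
    the output therefore remain unbiased."""
    rng = rng or random.Random(0)
    out = []
    for _ in range(n):
        external = rng.random()          # ~ms: sample 'external' conditions
        w = 1.0
        if not stage1_interesting(external):
            if rng.random() >= p1:
                continue                 # rejected before costly completion
            w /= p1
        state = (external + rng.random()) / 2   # ~1 min: complete base state
        if not stage2_interesting(state):
            if rng.random() >= p2:
                continue
            w /= p2
        out.append((state, w))
    return out

result = two_stage_sample(500)
print(f"{len(result)} weighted states from 500 candidates")
```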

17
Example decision tree
Decision tree for classification:
 1  if MTAHUP6_S_VL6_PGEN < -0.153883 then node 2 else node 3
 2  if ROMAIP6_S_VL6_QSHUNT < 54.9982 then node 4 else node 5
 3  class = false
 4  if TAMAR6COND_11_SC_V < 244.195 then node 6 else node 7
 5  if BXLIEL61ZGRA6_ACLS__TO__ZGRA6P6_S_VL6_V < 242.056 then node 8 else node 9
 6  if ANSERL61PRRTT_ACLS__TO__ANSERP6_S_VL6_Q < -47.9415 then node 10 else node 11
 7  class = false
 8  class = true
 9  class = false
10  class = false
11  if BOCTOL71N_SE1_ACLS__TO__N_SE1P7_S_VL7_V < 408.882 then node 12 else node 13
12  class = false
13  class = true

18
Importance sampling Importance sampling deliberately distorts the sampling of system states to focus on the “important” events (i.e. those that contribute to the risk metrics). Simulation results are corrected for this bias by sample weights. If done correctly, this procedure leads to large speed-ups.
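A textbook worked example of the distort-and-reweight idea, not the method of the study: estimating a small tail probability for an exponential random variable by sampling from a flatter exponential and correcting each hit with the likelihood ratio.

```python
import math
import random

def estimate_tail_prob(n, bias=0.5, rng=None):
    """Estimate P(X > 2) for X ~ Exp(1) by drawing from Exp(bias) instead,
    which visits the tail more often; each hit is reweighted by the
    likelihood ratio f(x)/g(x), so the estimate remains unbiased."""
    rng = rng or random.Random(0)
    total = 0.0
    for _ in range(n):
        x = rng.expovariate(bias)        # deliberately distorted draw
        if x > 2.0:                      # the 'important' event
            # weight = target density / sampling density at x
            total += math.exp(-x) / (bias * math.exp(-bias * x))
    return total / n

est = estimate_tail_prob(100_000)
print(round(est, 4))  # close to the exact value exp(-2) ≈ 0.1353
```

The biased sampler sees the rare event far more often than a direct Exp(1) sampler would, which is the source of the speed-up; the weights undo the distortion in the final average.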
