1
Hierarchical Virtual Power Plants: Architecture and Mechanisms for Bottom-Up and Horizontal Coordination for Peer-to-Peer Power Management and Markets
Prof. Dr. David E. Bakken
School of Electrical Engineering and Computer Science, Washington State University, Pullman, Washington, USA
Siemens AG, Corporate Technology, München, Germany
4 October 2018
© 2018 David E. Bakken
2
Outline
- Introduction
  - Bakken: A Computer Scientist
  - TUM & Siemens Interactions
- Context: Virtual Power Plants Overview
- Severe Shortcomings of Horizontal Coordination Today: Consensus
- DCBlocks: A Horizontal Coordination Framework
- Example: Using Groups
- Conclusions and Future Work
© 2018 David E. Bakken
3
Context
- IANAPP (I Am Not A Power Person): Computer Scientist
- Core background: fault-tolerant distributed computing
- Research lab experience (BBN) with wide-area middleware with QoS, resilience, security, … for DARPA/military
- Working with Anjan Bose since 1999 on wide-area data delivery issues appropriate for RAS and closed-loop applications
  - GridStat (1999-present)
  - GridSim ( )
  - GridCloud (2012-present)
  - DCBlocks (2014-present)
  - GridFog (2016-present)
- WSU has the #2 or #3 power research program in the US
May, 2014 | ISBN:
© 2018 David E. Bakken
4
TUM & Siemens Interactions
Prof. Dr. Hans-Arno ("Arno") Jacobsen
- Alexander von Humboldt Professor (Informatics)
- Chair of Applications and Middleware Systems
- Working on Energy Informatics etc. with Siemens
- Dr. Metzger's advisee/intern Jose Rivera was in Arno's group (PhD 2018)
- Martin Jergler (PhD 2017) in Arno's group (now at Siemens)
- MS student from the Balkans a few years ago
We knew of each other's work since the 1990s
- 2001+: I meet Siemens folks at power meetings!
- 2014: pinged Siemens authors & Arno about my visit; created a workshop out of my visit, plus Arno
- 2017: Dr. Fleischer visits WSU (November)
- 2018: Keynote at TUM workshop last week
- 2019: Hopeful sabbatical year starts at TUM; Hans Fischer Senior Fellow funding crucial
© 2018 David E. Bakken
5
Outline
- Introduction
  - Bakken: A Computer Scientist
  - TUM Interactions
  - Siemens Interactions
- Context: Virtual Power Plants Overview
- Severe Shortcomings of Horizontal Coordination Today: Consensus
- A Horizontal Coordination Framework
- Example: Using Groups
- Conclusions and Future Work
© 2018 David E. Bakken
6
Virtual Power Plants (VPP)
Enables a group of DER units to have the same visibility, control, and market functionality as conventional transmission-connected power plants.
- Dispatchable and non-dispatchable (stochastic)
- Controllable or flexible load (CL or FL)
- …
Many different definitions, none widely accepted
Characterization of a DER as a VPP [KBH+09]
Virtually all VPP papers seem to assume a top-down organization, with control and market signals going only downwards
© 2018 David E. Bakken
7
VPPs are Attractive
Individual DERs will be able to gain access and visibility across all the energy markets, and will benefit from VPP market intelligence in order to optimize their position and maximize revenue opportunities.
System operation will benefit from optimal use of all available capacity and increased efficiency of operation.
VPPs thus seem quite appealing
- Allow generation to be placed at/near where it is needed
- Ergo reduces the "kilometers times megawatts" problem
© 2018 David E. Bakken
8
Virtual Power Plants (cont.)
A VPP consists of three main parts:
- Generation Technology (skipping here…)
- Energy Storage Technologies (skipping here…)
- Information and Communication Technologies
I argue: not just communication, but (robust) coordination is required
- Allows peers (solar panel owners, etc.) to agree on offerings to the market: horizontal coordination
- Hierarchies: smart parking lots, smart buildings with storage, …
- Aggregated up a hierarchy: bottom-up organization/cooperation/federation
© 2018 David E. Bakken
9
Outline
- Introduction
  - Bakken: A Computer Scientist
  - TUM Interactions
  - Siemens Interactions
- Context: Virtual Power Plants Overview
- Severe Shortcomings of Horizontal Coordination Today: Consensus
- DCBlocks: A Horizontal Coordination Framework
- Example: Using Groups
- Conclusions and Future Work
© 2018 David E. Bakken
10
Background
Dominant architecture in the power grid: centralized control center (CC)
- With limited local control: protection, transformer tap changes, reactive power control
Big changes in the "smart grid" need decentralized apps
- Renewables
- Much larger number of sensors in the field
- Faster response than a round trip to the CC
- Intermittent loads (batteries: charging and DR)
- CC is a single point of failure AND attack
© 2017 David E. Bakken
11
Challenges Abound!
Challenges for CC-based monitoring and control:
- Large amount of measurement data
- Large set of system variables
- Intermittent nature of DERs (e.g., wind farms) and battery-operated loads (e.g., EVs)
- Slow control action response
Challenges for completely local control:
- Based on local disturbance and limited network visibility
- Possible cascading effect on the neighboring areas
© 2018 David E. Bakken
12
Vision for Decentralized Apps
Existing Monitoring and Control:
- Centralized: Slow, Not Scalable, Optimal, Prone to failures
- Local: Fast, Non-optimal, Hard coded, May fail for unexpected conditions
Proposed Monitoring and Control:
- Distributed, Coordinated and Hierarchical: Fast, Scalable, Sub-Optimal, Fault-tolerant, Supports Big Data, Supports IoT
© 2018 David E. Bakken
13
Decentralized Apps Decentralized apps are here!
"consensus" appearing in power papers
Issue: how to combine one value from each cooperating decentralized process into the same scalar value at each process
- Power definition: math to merge a vector of values into a scalar
  - Assumes perfect communications
- Informatics definition: how to have a group of cooperating processes agree on the scalar value, despite
  - Variable network delays
  - Dropped messages
  - Failures of processes, network links
  - Etc. etc. etc. …
Make Remedial Action Schemes (RAS) dynamic
- Now configured on install: inflexible, not adaptive
- What happens when the power topology and operating point change? Cyber topology changes (node or link failure)?
- Can be hierarchical based on power topology
© 2018 David E. Bakken
14
How to agree on a value? “Yes” or “No”
Only based on messages seen locally
(Figure: three processes P1, P2, P3 exchanging messages)
© 2018 David E. Bakken
15
Algorithm #0: Naïve
- Each process sends all others their choice/vote
- Combine in some deterministic way: majority, unanimous-or-abort, …
(Figure: P1, P2, P3 exchanging Y/N votes)
Limitation: Assumes Perfect Communications
© 2018 David E. Bakken
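As a minimal sketch (plain Java, not DCBlocks code), assuming perfect message delivery and a fixed group, the naive scheme reduces to collecting the full vote vector and applying a shared deterministic rule:

```java
import java.util.List;

// Naive Algorithm #0: every process sends its vote to all others and combines the
// full vector with a deterministic rule (here: unanimity-or-abort).
// This only works if every message reaches every process.
public class NaiveVote {
    static boolean combine(List<Boolean> votes) {
        // Deterministic rule shared by all processes: commit only on unanimous "yes".
        return votes.stream().allMatch(v -> v);
    }

    public static void main(String[] args) {
        // Votes as every process would see them under perfect delivery: P1=Y, P2=N, P3=Y.
        List<Boolean> votesSeenEverywhere = List.of(true, false, true);
        System.out.println("decision = " + (combine(votesSeenEverywhere) ? "Y" : "N"));
    }
}
```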
16
Distributed Computing is HARD
Two huge facts of life:
- Variable (computer) network delay
- Partial failures
So different cooperating processes can see
- Different message arrival order
- Different failures
- Different timeouts
- Different group membership
Very subtle boundary cases for power engineers to program! Can result in inconsistent decisions.
In the last few years I have asked at least a DOZEN power engineers who have built and fielded decentralized apps. NONE of them handled even the simplest boundary cases as motivated above.
© 2018 David E. Bakken
17
Problem #1a for Naïve Algorithm #0
(Figure: one vote is delivered to some processes but not to P2)
Resulting vectors: P1, P3: [Y,N,Y]; P2: [Y,N]
Problem: not all-or-none ("atomic") message delivery to the group
Note: TCP is no panacea: (1) only a partial solution, (2) not generally used due to inefficiency/delays, (3) can't use with multicast
© 2018 David E. Bakken
18
Problem #1b for Naïve Algorithm #0
Issue: can't wait forever for a message (it may never come)
Normal solution: set a timeout, give up if it fires
(Figure: P2 times out waiting for one of the votes)
Result: P1, P3: [Y,N,Y]; P2: [Y,N]
Problem: not all-or-none ("atomic") message delivery to the group
Note: TCP is of no help here: it has to time out at some point
© 2018 David E. Bakken
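To make Problem #1b concrete, here is a sketch (illustrative Java, not DCBlocks) of the usual timeout workaround: each process collects whatever votes arrive before its deadline, so different processes can decide on different vectors.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;
import java.util.concurrent.TimeUnit;

// Each process waits at most `timeoutMs` for votes, then decides on whatever subset
// it happened to receive - so two processes can end up with different vote vectors.
public class TimeoutVoteCollector {
    private final BlockingQueue<Boolean> inbox = new LinkedBlockingQueue<>();

    public void deliver(boolean vote) { inbox.add(vote); }

    public List<Boolean> collect(int expected, long timeoutMs) throws InterruptedException {
        List<Boolean> votes = new ArrayList<>();
        long deadline = System.currentTimeMillis() + timeoutMs;
        while (votes.size() < expected) {
            long remaining = deadline - System.currentTimeMillis();
            if (remaining <= 0) break;                         // give up: vote may never come
            Boolean v = inbox.poll(remaining, TimeUnit.MILLISECONDS);
            if (v == null) break;                              // timed out
            votes.add(v);
        }
        return votes;                                          // possibly incomplete vector
    }

    public static void main(String[] args) throws InterruptedException {
        TimeoutVoteCollector p2 = new TimeoutVoteCollector();
        p2.deliver(true);    // P1's vote arrives
        p2.deliver(false);   // P2's own vote
        // P3's vote never arrives; after 100 ms P2 decides on [Y, N] instead of [Y, N, Y].
        System.out.println("P2's vector: " + p2.collect(3, 100));
    }
}
```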
19
WAY more than Problem 1 can GO WRONG (redux)
Two huge facts of life:
- Variable (computer) network delay
- Partial failures
So different cooperating processes can see/detect
- Different message arrival order
- Delivery of a message to some, but not all, processes
- Different failures
- Different timeouts
- Different group membership
Very subtle boundary cases for power engineers to program! Can result in inconsistent decisions.
© 2018 David E. Bakken
20
Distributed Coordination R&D
Since 1979!
- Theoretical papers; most algorithms never programmed
- Papers very hard for an MSCS student to understand
- Impossible for a power engineer to deeply understand, or even find
Ergo, invariably lots of boundary cases being missed across the industry
Results in less resilient systems
© 2018 David E. Bakken
21
Complexity of Dist. Coord. Algorithms
Consensus: processes agree on one or more values from a set of proposed values.

Name | Failure Model | Message complexity | Time complexity (number of rounds)
Scalar Consensus (also Vector, Matrix) | Crash, Omission, Byzantine | N(F + 1) | F + 1
K-set Agreement | | | F/K + 1
Paxos Consensus | | (2F+1)(N-3) | 4T
2-Phase Commit | None | 2(N - 1) | 2
3-Phase Commit | Crash | 3(N - 1) | 3 to 6

FYI: "vector agreement" is called "interactive consistency" in the literature; "matrix agreement" is what Bakken proposes, and internally calls "Supply Agreement" (supporting smart parking lots).
Where: N = number of processes, F = number of faulty processes, K = max possible decision values in the K-set algorithm, T = message delay
22
Attributes of a Candidate Algorithm
Decentralized/decentralizable decisions
- Decision in one place but may rotate/change (e.g., with ICT failure or changing power conditions)
- Race conditions possible
- "Inconsistent" decisions possible with ad hoc techniques
© 2018 David E. Bakken
23
Outline
- Introduction
  - Bakken: A Computer Scientist
  - TUM Interactions
  - Siemens Interactions
- Context: Virtual Power Plants Overview
- Severe Shortcomings of Horizontal Coordination Today: Consensus
- DCBlocks: A Horizontal Coordination Framework
- Example: Using Groups
- Conclusions and Future Work
© 2018 David E. Bakken
24
DCBlocks Decentralized Coordination Blocks
Package up and make usable solutions to the most useful coordination problems (open source):
- Group discovery/formation
- Group membership
- Agreement/consensus
- Group management
- Leader election
- Voting
- Ordered multicast (ABCAST)
- Mutual exclusion
Version 0: Shwetha Niddodi, Decentralized Coordination Building Blocks (DCBlocks) for Decentralized Monitoring and Control of Smart Grids, WSU MS Thesis, December 2015.
© 2018 David E. Bakken
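For a feel of how such blocks might be exposed to an application, here is a hypothetical Java interface sketch; the names and signatures are illustrative and are not the actual DCBlocks API (only the addMember method is mentioned later in this deck).

```java
import java.util.List;
import java.util.function.Consumer;

// Hypothetical interface sketch for the coordination blocks listed above.
// Names and signatures are illustrative; the real DCBlocks API may differ.
interface GroupMembership {
    void addMember(String nodeId);                     // join the group
    List<String> currentMembers();                     // agreed-upon membership view
    void onViewChange(Consumer<List<String>> listener);
}

interface LeaderElection {
    void startElection();
    String currentLeader();
}

interface Consensus<T> {
    T propose(T value) throws InterruptedException;    // blocks until the group agrees
}

interface OrderedMulticast {                            // ABCAST: same delivery order everywhere
    void abcast(byte[] message);
    void onDeliver(Consumer<byte[]> handler);
}
```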
25
Use Cases Identified So Far
Power: LOTS (next slide)
Smart Transportation:
- Intersection management for city-wide throughput management
- Countryside peer-to-peer (generic) coordination of vehicles
- Platoons: who will join, who rides "shotgun" (provides reduced-drag service to the rest of the platoon), when to switch, …
UAV swarms: collective decisions
- May later add in specification/constraints for emergent behavior
Both Smart Transportation & UAV Swarms need much faster group management than traditionally possible (future research)
- Maybe imperfect (how imperfect? which apps can handle it?)
- Maybe using position and velocity vector to predict when likely out of range
© 2013 David E. Bakken
26
Power Use Cases (So Far…)
Decentralized Power App | Applicable DC Blocks | Pub
Distributed Voltage Stability | Group Membership, Leader Election, ABCAST | [LNS+16]
Distributed State Estimation | Group Membership, Leader Election, ABCAST | [LSA+16]
Distributed Remedial Action Schemes | Group Membership, Scalar Consensus, ABCAST |
Decentralized Wind Power Monitoring and Control | | [KLA+17]
Distributed Frequency Control | Group Discovery, Group Membership, Leader Election, Matrix Consensus, ABCAST | [BAS+17]
Decentralized Optimal Power Flow | Group Management, Matrix Consensus, ABCAST | [KGL+18]
Decentralized Reactive Power Control | Group Management, Vector Consensus |
Decentralized Inverter Control | Group Management |
© 2013 David E. Bakken
27
Other Notes [Lee17, Gop18]
- Groups can be dynamic
- Groups can be hierarchical
© 2018 David E. Bakken
28
Outline
- Introduction
  - Bakken: A Computer Scientist
  - TUM Interactions
  - Siemens Interactions
- Context: Virtual Power Plants Overview
- Severe Shortcomings of Horizontal Coordination Today: Consensus
- DCBlocks: A Horizontal Coordination Framework
- Example: Using Groups
- Conclusions and Future Work
© 2018 David E. Bakken
29
Centralized RAS The objective of the centralized RAS algorithm is to reduce wind energy output to prevent overloads on the transmission line. The algorithm also aims to minimize the amount of renewable energy that is curtailed. This can be mathematically formulated as an optimization problem.
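One plausible way to write that optimization down, purely as a sketch (the exact formulation used in the cited work may differ): minimize total wind curtailment subject to every post-curtailment line flow staying within its limit. Here $\Delta P_w$ is the curtailment at wind unit $w$, $F_\ell(\cdot)$ the resulting flow on line $\ell$, and $F_\ell^{\max}$ that line's limit; all symbols are introduced here only for illustration.

```latex
\begin{align*}
\min_{\Delta P} \quad & \sum_{w \in W} \Delta P_{w} \\
\text{s.t.} \quad & \lvert F_{\ell}(P - \Delta P) \rvert \le F_{\ell}^{\max} \quad \forall \ell \in L \\
                  & 0 \le \Delta P_{w} \le P_{w} \quad \forall w \in W
\end{align*}
```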
30
Decentralized Linear State Estimation
Traditional centralized state estimation may need additional time to converge, and may not converge at all. DLSE is developed as an alternative solution.
The main idea of DLSE is to divide the power system computational data into a set of groups and process it in a distributed manner to reduce the computational burden.
Linear state estimation is a non-iterative method, and much faster than traditional centralized methods.
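For context, the non-iterative core of a linear state estimator is a single weighted least-squares solve (standard textbook form, not specific to this work): with a linear measurement model $z = Hx + e$ and measurement error covariance $R$, the estimate is

```latex
\hat{x} = \left( H^{\top} R^{-1} H \right)^{-1} H^{\top} R^{-1} z
```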
31
Decentralized Linear State Estimation
The power system is divided into groups based on:
- Electrical distance
- Connectivity of the boundary bus
- Computational ability
- Requirements of supported applications
32
Decentralized Linear State Estimation
33
Distributed Remedial Action Scheme
The proposed Distributed RAS logic is designed to be implemented in multiple controllers at electric substations, connected to data sources in a decentralized manner.
To achieve the distributed implementation of the RAS, the overall objective and constraints must be mathematically distributed among the various computing nodes.
The actors also monitor the power system states, check whether there is any overload on the transmission lines, and calculate the minimum curtailment value in case of overloads.
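A minimal sketch of the per-actor overload check described above (hypothetical names; the simplistic one-to-one effect of curtailment on the worst line is my assumption, not the paper's):

```java
// Hypothetical per-actor check: compare monitored line flows against limits and,
// on overload, estimate how much local wind output would have to be curtailed.
public class OverloadMonitor {
    /** Returns the curtailment (MW) needed to relieve the worst overload, assuming
     *  (simplistically) that local curtailment reduces that line's flow one-to-one. */
    public static double minimumCurtailment(double[] lineFlows, double[] lineLimits) {
        double worstOverload = 0.0;
        for (int i = 0; i < lineFlows.length; i++) {
            double overload = Math.abs(lineFlows[i]) - lineLimits[i];
            if (overload > worstOverload) worstOverload = overload;
        }
        return worstOverload;   // 0.0 means no overload, nothing to curtail
    }

    public static void main(String[] args) {
        double[] flows  = {95.0, 180.0, 40.0};   // MW, illustrative values
        double[] limits = {100.0, 150.0, 60.0};
        System.out.println("curtail " + minimumCurtailment(flows, limits) + " MW");
    }
}
```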
34
Distributed Simplex Method
The numbers in the last row (except the last two) are called column indicators. The column with the most negative column indicator is the pivot column.
For each row, a row indicator is found by dividing the last number in the row by the number in the pivot column. The number in the pivot column and the pivot row is the pivot number.
The pivot row is divided by the pivot number to change the pivot to 1; matrix row operations are then used to change all other numbers in the pivot column to 0.
This is repeated until all column indicators are non-negative.
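A compact sketch of that pivot loop on a dense tableau (a generic textbook simplex pivot in Java, not the distributed variant itself; the smallest-positive-ratio rule for choosing the pivot row is the standard choice and is assumed here):

```java
// One way to code the pivot loop just described, on a tableau whose last row holds
// the column indicators and whose last column holds the right-hand sides.
public class SimplexPivot {
    static void pivotUntilOptimal(double[][] t) {
        int rows = t.length, cols = t[0].length;
        while (true) {
            // Pivot column: most negative column indicator (last two columns excluded).
            int pc = -1;
            double mostNegative = 0.0;
            for (int j = 0; j < cols - 2; j++)
                if (t[rows - 1][j] < mostNegative) { mostNegative = t[rows - 1][j]; pc = j; }
            if (pc < 0) return;   // all column indicators non-negative: done

            // Pivot row: row indicator = last number in the row / pivot-column entry;
            // the standard rule picks the smallest positive indicator.
            int pr = -1;
            double bestRatio = Double.POSITIVE_INFINITY;
            for (int i = 0; i < rows - 1; i++) {
                if (t[i][pc] > 0) {
                    double ratio = t[i][cols - 1] / t[i][pc];
                    if (ratio < bestRatio) { bestRatio = ratio; pr = i; }
                }
            }
            if (pr < 0) throw new IllegalStateException("problem is unbounded");

            // Divide the pivot row by the pivot number so the pivot becomes 1 ...
            double pivot = t[pr][pc];
            for (int j = 0; j < cols; j++) t[pr][j] /= pivot;
            // ... then use row operations to zero out the rest of the pivot column.
            for (int i = 0; i < rows; i++) {
                if (i == pr) continue;
                double factor = t[i][pc];
                for (int j = 0; j < cols; j++) t[i][j] -= factor * t[pr][j];
            }
        }
    }

    public static void main(String[] args) {
        // Maximize 3x + 2y subject to x + y <= 4 and x <= 2
        // (slack columns s1, s2, then the objective column and the right-hand side).
        double[][] tableau = {
            {  1,  1, 1, 0, 0, 4 },
            {  1,  0, 0, 1, 0, 2 },
            { -3, -2, 0, 0, 1, 0 }
        };
        pivotUntilOptimal(tableau);
        System.out.println("optimal objective value = " + tableau[2][5]);   // 10.0
    }
}
```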
35
Outline
- Introduction
  - Bakken: A Computer Scientist
  - TUM Interactions
  - Siemens Interactions
- Context: Virtual Power Plants Overview
- Severe Shortcomings of Horizontal Coordination Today: Consensus
- DCBlocks: A Horizontal Coordination Framework
- Example: Using Groups
- Conclusions and Future Work
© 2018 David E. Bakken
36
Conclusions
- Decentralized power apps have become much more prevalent in recent years
- Decentralized apps have MANY tricky boundary cases that are being missed by power engineers
  - Can lead to inconsistent decisions
- Distributed coordination R&D over the last 4 decades can help, but is unusable to EEs
- WSU is developing applied frameworks to help power engineers: DCBlocks and successors
© 2018 David E. Bakken
37
IoTp2p
(Figure: layered IoTp2p architecture)
- Apps: Grid-Coord, UAV-Coord, Car-Coord, Semi-Coord, City-Coord, Robot-Coord, …, IoT-Coord
- IoTp2p Domain-Specific Runtime Systems & Monitoring Mechanisms
- Core Management/Monitoring Blocks (Mechanisms): Agreement/Consensus, Leader Election, Tunable Failure Detectors, Negotiation, …
- GridStat, OMG DDS, OpenFMB, …; IP Multicast, UDP/IP, …
- Inputs: cyber-sensors, physical sensors, COORD msgs
© 2018 David E. Bakken
38
What Can Domain Layers Add?
Domain-specific APIs: simplified, specialized, …
Domain-specific policy "languages":
- Domain-specific coordination/constraint languages (that generate domain-specific API code; ~FSA-like?)
- Domain-specific state machines to describe negotiations (generate code)
Domain-specific management/monitoring mechanisms
… more TBD …
© 2018 David E. Bakken
39
Features
- Extensible: (advanced) users can add a new policy/constraint language and compiler to generate IoTp2p runtime code
  - I.e., the IoTp2p architecture defines containers for them
  - Start out with the QuO runtime system for contracts and system condition objects
- Failure Detection
- Managed: switch between multiple implementations of the same block (best tradeoff given present conditions)
- Adaptive
- Support hybrid (multiple) failure models & switching between them
- Cyber-Physical
  - Including criteria for leader election, etc.
© 2018 David E. Bakken
40
Hybrid Failures
- TOLERATE benign failures by default
- MONITOR for evidence of Byzantine behavior
- SWITCH failure models (and of course the implementations of Blocks such as agreement that use them)
  - Disable some in Byzantine mode: leader election, others?
Some open research here: how to cleanly switch agreement (etc.) instances between
- Different implementations with the same failure model
- Different failure models
Maybe simple: just stop new agreements/blocks, flush ongoing/pending ones, switch failure assumptions, restart (see the sketch below).
© 2018 David E. Bakken
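The "maybe simple" switch-over above, written as a small illustrative state holder (hypothetical; it deliberately ignores the open questions about in-flight agreement instances):

```java
// Illustrative failure-model switch: stop admitting new agreement instances,
// drain the pending ones, swap the failure assumption, then resume.
public class FailureModelManager {
    enum Model { BENIGN, BYZANTINE }

    private volatile Model model = Model.BENIGN;   // tolerate benign failures by default
    private volatile boolean acceptingNew = true;

    public synchronized void onByzantineEvidence() {
        acceptingNew = false;        // 1. stop new agreements/blocks
        drainPendingAgreements();    // 2. flush ongoing/pending instances
        model = Model.BYZANTINE;     // 3. switch failure assumptions
        acceptingNew = true;         // 4. restart under the new model
    }

    public boolean canStartAgreement() { return acceptingNew; }
    public Model currentModel()        { return model; }

    private void drainPendingAgreements() {
        // Placeholder: wait for in-flight agreement instances to finish or abort.
    }
}
```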
41
Reusing Existing Software
Bakken's Condition: QuO-RTS and SysCond objects
Block::Voting: Voting VM, Mr. Fusion
Pub/Sub and reliable delivery: GridStat (can do 1:1)
Security mechanisms: managed chain of filters from GridStat
IoTp2p baseline (not code reuse): DCBlocks
- "Plan on throwing one version away. You're going to, anyway", Fred Brooks, The Mythical Man-Month, ~1975
Others' Agreement: RAFT, Paxos, …
Group Communication: comes with Akka (Java), which we built DCBlocks with
© 2018 David E. Bakken
42
References
[BAS+17] D. Bakken, A. Askerman, A. Srivastava, P. Panciatici, M. Seewald, F. Columbus, and S. Jiang. "Towards Enhanced Power Grid Management via More Dynamic and Flexible Edge Communications", in Proceedings of the Fog World Congress, IEEE and OpenFog, Santa Clara, CA, Oct. 30,
[Gop18] Shyam Gopal. "Building Distributed Applications for Monitoring and Control on the Resilient Information Architecture Platform (RIAPS) for Smart Grids". MS Thesis, Washington State University, expected December 2018.
[KBH+09] Christophe Kieny, Boris Berseneff, Nouredine Hadjsaid, Yvon Besanger, and Joseph Maire. "On the concept and the interest of virtual power plant: Some results from the European project Fenix." In Proceedings of the 2009 IEEE PES General Meeting, July 26-30, 2009, Calgary, AB, Canada.
[KGL+18] V. Krishnan, S. Gopal, R. Liu, A. Srivastava, and D. Bakken. "Resilient Cyber Infrastructure for the Minimum Wind Curtailment Remedial Control Scheme", IEEE Transactions on Industry Applications, 2018, to appear.
© 2018 David E. Bakken
43
References
[Lee17] HyoJong Lee. Development, Modeling, and Application of PMUs. PhD Dissertation, Washington State University, 2017.
[LNS+16] Hyojong Lee, Shwetha Niddodi, Anurag Srivastava, and David Bakken. "Decentralized Voltage Stability Control in the Smart Grid using Distributed Computing Architecture", in Proceedings of the 2016 IEEE Industry Applications Society Annual Meeting (IAS), Portland, OR, 2-6 October 2016.
[LSA+16] Ren Liu, Anurag Srivastava, Alexander Askerman, David Bakken, and Patrick Panciatici. "Decentralized State Estimation and Remedial Control Action for Minimum Wind Curtailment Using Distributed Computing Platform", IEEE Transactions on Industry Applications, 23(6).
© 2018 David E. Bakken
44
References
[KLA+17] V. Krishnan, R. Liu, A. Askerman, A. Srivastava, D. Bakken, and P. Panciatici. "Resilient Cyber Infrastructure for the Minimum Wind Curtailment Remedial Control Scheme", in Proceedings of the 2017 IEEE Industry Applications Society Annual Meeting, Cincinnati, Ohio, USA, October 1-5, 2017.
[Nid15] Shwetha Niddodi. Decentralized Coordination Building Blocks (DCBlocks) for Decentralized Monitoring and Control of Smart Power Grids. MS Thesis, Washington State University, December 2015.
[PRS07] Pudjianto, D., Ramsay, C., and Strbac, G. "Virtual power plant and system integration of distributed energy resources", IET Renewable Power Generation, 1(1), March 2007.
© 2018 David E. Bakken
45
BACKUP SLIDES © 2018 David E. Bakken
46
Implementation of DLSE and RAS using DCBlocks
The implementation steps of DLSE using DCBlocks are (a sketch of these steps follows below):
1. Initialization on each computational node.
2. Initial grouping. The nodes join the group using the addMember method.
3. Each node starts leader election.
4. Each node starts to send its local power measurements to its group leader and backup leader.
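Reusing the hypothetical interfaces sketched on the DCBlocks slide earlier, steps 1-4 might look roughly like this (illustrative only, not the actual implementation):

```java
// Hypothetical node start-up following the steps above: join the group via
// addMember, run leader election, then stream local measurements toward the
// leader and backup leader. Interfaces and names are illustrative.
public class DlseNode {
    private final GroupMembership membership;
    private final LeaderElection election;
    private final OrderedMulticast abcast;
    private final String nodeId;

    DlseNode(String nodeId, GroupMembership m, LeaderElection e, OrderedMulticast a) {
        this.nodeId = nodeId; this.membership = m; this.election = e; this.abcast = a;
    }

    void start(double[] localMeasurements) {
        membership.addMember(nodeId);        // step 2: join the initial group
        election.startElection();            // step 3: every node starts leader election
        // step 4: send local measurements toward the leader (and backup) via ordered multicast
        abcast.abcast(encode(localMeasurements));
    }

    private byte[] encode(double[] values) {
        java.nio.ByteBuffer buf = java.nio.ByteBuffer.allocate(values.length * Double.BYTES);
        for (double v : values) buf.putDouble(v);
        return buf.array();
    }
}
```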
47
WSU SGDRIL Cyber-Physical HIL Lab
© 2018 David E. Bakken
48
Implementation of DLSE and RAS using DCBlocks
5. The node sends topology change information to the leader.
6. The boundary bus voltage phasor value and tie-line current phasor value are obtained from the adjacent group leaders.
7. The leader runs the DLSE monitoring algorithm.
8. The leader runs the RAS to monitor any overload conditions.
Note multiple groups here!!!
49
Implementation of DLSE and RAS using DCBlocks
If the leader is detected as faulty, then a secondary (backup) computational node is elected as leader to take over running the algorithms. Requirements for this include:
- Confirmation of data consistency between the leader and backup node, ensuring a seamless failover after any failure.
- If the status of the leader's communication channel cannot be checked, the nodes of the group should vote on switching to the backup.
- If the fault is detected on the leader's communication channel, the backup leader can announce to the members that it is the new leader.
- In the case of any leadership change, a new election should occur so that two leaders (primary and backup) again receive all the data.
A sketch of this failover logic follows below.
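Read as code, those requirements might boil down to a decision like the following (a hedged sketch; the data-consistency check is only a boolean placeholder here):

```java
// Illustrative failover decision for group members when the current leader looks faulty.
public class FailoverPolicy {
    enum LeaderStatus { ALIVE, CHANNEL_STATUS_UNKNOWN, CHANNEL_FAULTY }
    enum Action { KEEP_LEADER, VOTE_ON_SWITCH, BACKUP_ANNOUNCES, RESYNC_BACKUP_FIRST }

    /** Map the observed leader status (and backup consistency) to the action the slides describe. */
    public static Action decide(LeaderStatus status, boolean backupDataConsistent) {
        if (!backupDataConsistent) return Action.RESYNC_BACKUP_FIRST;     // no seamless failover otherwise
        switch (status) {
            case CHANNEL_STATUS_UNKNOWN: return Action.VOTE_ON_SWITCH;    // group votes on switching
            case CHANNEL_FAULTY:         return Action.BACKUP_ANNOUNCES;  // backup announces new leadership
            default:                     return Action.KEEP_LEADER;
        }
    }

    public static void main(String[] args) {
        System.out.println(decide(LeaderStatus.CHANNEL_FAULTY, true));    // BACKUP_ANNOUNCES
    }
}
```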
50
30 Bus DVS Simulation [Lee17]
Test case: a voltage stability problem occurs and regrouping is needed (merge request). NOTE: dynamic groups!
- A voltage stability problem occurs on bus 30 due to a major transmission line outage from bus 4 to bus 6.
- This is an extreme case, so all reactive power sources should be used to resolve the voltage stability problem.
- The group leader requests reactive power sources among the group members, but it is not enough to resolve the problem.
- The group leader communicates among the neighbor groups to find extra reactive power sources.
- Then the group leader performs a merging action to activate additional reactive power resources from the neighbor group.
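As a toy illustration of that merge decision (names invented; this is not the simulated controller's code), a leader might request a merge only when its own group's reactive reserve cannot cover the deficit:

```java
// Illustrative merge decision for a group leader in the DVS scenario above.
public class GroupMergeDecision {
    /** Returns true if the group's own reactive reserve cannot cover the deficit,
     *  in which case the leader would ask a neighbor group to merge. */
    public static boolean shouldRequestMerge(double reactiveDeficitMVAr,
                                             double[] memberReservesMVAr) {
        double available = 0.0;
        for (double r : memberReservesMVAr) available += r;
        return available < reactiveDeficitMVAr;
    }

    public static void main(String[] args) {
        double[] reserves = {5.0, 3.0, 2.0};                      // MVAr per member, illustrative
        System.out.println(shouldRequestMerge(15.0, reserves));   // true: ask a neighbor group
    }
}
```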
51
Timeline of Distributed Computing
1969: BBN builds the ARPANET, the first internet
- Builds off of ideas from Kleinrock (UCLA), others
1970s: figuring out how to do network programming
- Rick Schantz joins BBN in 1973; PhD in 1974
1980s: figuring out how to ship data structures around (to support distributed app programming)
- Middleware came from this
- 1981: Cronus (BBN); 1989: OMG (CORBA)
1990s: Quality of Service (QoS), mainly Layer 3
- Bakken at BBN
2000s: Cloud computing
2010s: Pervasive usage of mobile phones
© 2018 David E. Bakken