
1 Context-Dependent Network Agents
EPRI/ARO CINS Initiative
CDNA Consortium: CMU, RPI, TAMU, Wisconsin, UIUC

2 The CDNA Consortium
- Carnegie Mellon University: Prof. Pradeep Khosla, Prof. Bruce Krogh, Dr. Eswaran Subrahmanian, Prof. Sarosh Talukdar
- Rensselaer Polytechnic Institute: Prof. Joe Chow
- Texas A&M University: Prof. Garng Huang, Prof. Mladen Kezunovic
- University of Illinois at Urbana-Champaign: Prof. Lui Sha
- University of Minnesota: Prof. Bruce Wollenberg

3 CDNA Objective
- Improve the agility and robustness (survivability) of large-scale dynamic networks that face new and unanticipated operating conditions.
- Target networks:
  - U.S. power grid
  - Local networks

4 CDNA Approach
- Improve the decision-making competence of components distributed throughout the network, particularly existing and future control devices such as relays, voltage regulators, and FACTS.

5 Why CDNA?
Centralized real-time control is:
- infeasible in many situations, because of the distribution of information and the growing number of independent decision makers on the grid;
- intractable: robust control algorithms simply do not scale, and the problems are NP-hard;
- undesirable: we contend that centralized solutions are less robust against major network upsets and less adaptive to new situations.

6 Why CDNA? (cont'd)
- Control devices are already pre-programmed for anticipated situations, BUT "one-size-fits-all" strategies are conservative in most cases and wrong in some (the most critical!) situations.
- The communication and computation technology necessary for CDNA exists today.

7 Key Research Issues
- Modeling
  - operating modes
  - contingencies
  - impact of restructured power systems
  - device capabilities/influence

8 Key Research Issues - 2
- State estimation
  - using local information
  - network state estimation
  - real-time constraints
- Hybrid control
  - adaptive mode switching
  - coverage

9 Key Research Issues - 3
- Learning
  - distributed learning
  - state-space decomposition
- Coordination
  - collaboration strategies
  - moving off-line techniques for asynchronous algorithms on-line

10 Decentralized Large Area Power System Control
Bruce Wollenberg, University of Minnesota

11 Objectives
- The research goal is to show how all standard functions built on a power flow calculation can be accomplished without a large-area (centralized) model and computer system.
- Each region of the power system retains its own control system, models its own power network, and communicates with its immediate neighbors.
- Functions that now require central computing:
  - Security Analysis
  - Optimal Power Flow
  - Available Transfer Capability

12 Typical Power Pool or ISO
Trends:
- Getting larger
- Standard data formats
- Less functionality in regional systems
Examples:
- California ISO
- Midwest ISO

13 Networked Control Systems
- A region can be any size.
- The scheme can extend to any number of regions.
- The aggregate has the same functionality as a large-area control system.
- Can new functionality be added that would not be available in a central system?

14 Collaborative Nets
Eduardo Camponogara and Sarosh Talukdar
Institute for Complex Engineered Systems, Carnegie Mellon University

15 Controlling Large Networks
Operating goals fall into categories:
- Costs & profits
- Safety
- Regulations
- Equipment limits
Limitations:
- No single organization can cope with all operating goals
- Diverse skills are needed
- Multitudes of agents
Control solution:
- Delegate goals to separate organizations
Definitions:
- Organization: a network of agents and communication links.
- Agent: any entity that makes and implements decisions, such as relays, control devices, and humans.

16 Multiple Organizations in the Power Grid
- Protection Systems: agents are relays; reaction times of 0.01 to 0.1 seconds.
- Generator Control: agents are governors and exciters; reaction times on the order of seconds.
- Security Systems: agents are optimization software, simulation & learning tools, and humans; reaction times of hours or days.
- Goals across these organizations: keep equipment under limits, reduce cost subject to constraints, prevent cascading failures.
- Moving from Protection Systems toward Security Systems, agent skills go from low to high, the number of agents from large to small, and agent speed from fast to slow.

17 Organizations Do Not Collaborate
Current scenario (Generator Control, Security Systems, Protection Systems):
- Agents in separate organizations do not "talk."
- Agents might work at cross-purposes.
- Organizations might interfere with one another.
How do we make individual agents more effective? How do we prevent interference between organizations?

18 Improving Overall Performance of Nets
The suggested answer is based on:
1. The use of a common framework to specify agent tasks.
2. The implementation of a sparse, collaborative net (C-Net) that can cut across the hierarchic organizations (Generator Control, Security Systems, Protection Systems).
3. The design of collaboration protocols to promote effective exchange of information.

19 What Is a Collaborative Net?
A flat organization of dissimilar agents that can integrate hierarchic organizations.
Properties:
- Agents are autonomous within the C-Net: they have initiative, and they make and implement decisions.
- Agents collaborate with their neighbors. The collaboration protocol determines:
  - what information is exchanged,
  - in which way, and
  - how agents make use of it.
Advantages:
- Quick
- Fault tolerant
- Open
Disadvantages:
- No structural coordination; if necessary, it can emerge from the collaboration protocol.
- Unfamiliar.

20 The Rolling Horizon Formulation
A framework for solving dynamic control problems as a series of static optimization problems.

The dynamic control problem:
  Minimize   f(x, dx/dt, u, t)
  Subject to h(x, dx/dt, u, t) = 0
             g(x, dx/dt, u, t) <= 0

The steps of the rolling horizon formulation:
1. Choose a horizon [t_0, ..., t_N], i.e., a set of time points where t_0 is the current time.
2. Let x(t_n) be the state predicted at time t_n; x(t_0) is the current state.
3. Let u(t_n) be the planned actions at time t_n.
4. Let X = [x(t_0), ..., x(t_N)] and U = [u(t_0), ..., u(t_N)].
5. Choose a model to predict x(t_{n+1}) from x(t_n) and u(t_n), possibly a discrete approximation of the dynamic equations (e.g., Euler's step).

The static optimization problem (P):
  Minimize   f(X, U)
  Subject to H(X, U) = 0
             G(X, U) <= 0

The prediction model is embedded in H(X, U); G(X, U) approximates the operating constraints in g(x, dx/dt, u, t).
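As an illustration of step 5, the sketch below shows one way an Euler-step prediction model can be written. It is a minimal sketch; the function names and the generic dynamics(x, u) interface are assumptions for illustration, not part of the slides.

```python
import numpy as np

def euler_predict(x_n, u_n, dt, dynamics):
    """Predict x(t_{n+1}) from x(t_n) and u(t_n) with one Euler step.

    dynamics(x, u) returns dx/dt; dt is the spacing between horizon points.
    """
    return x_n + dt * dynamics(x_n, u_n)

def rollout(x0, U, dt, dynamics):
    """Build the predicted trajectory X = [x(t_0), ..., x(t_N)] from the
    current state x(t_0) and the planned actions U = [u(t_0), ..., u(t_N)]."""
    X = [np.asarray(x0, dtype=float)]
    for u_n in U[:-1]:  # u(t_N) has no successor state to predict
        X.append(euler_predict(X[-1], u_n, dt, dynamics))
    return X
```

Equality constraints of the form x(t_{n+1}) - x(t_n) - dt * dynamics(x(t_n), u(t_n)) = 0, one per horizon step, are what H(X, U) = 0 embeds.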

21 The Rolling Horizon Algorithm
Steps of the algorithm:
1. The current time is t_0.
2. Sense the current state x(t_0).
3. Instantiate the static optimization problem (P).
4. Solve (P) to obtain the control actions U = [u(t_0), ..., u(t_N)].
5. Implement the control action u(t_0).
6. Pause and let the physical network progress in time; the horizon "rolls" forward.
7. Repeat from step 1.
In other words, a model is used to predict the future state of the physical network over a set of discrete points in time (the horizon), and an optimization procedure computes the control actions, over the horizon, that minimize "error."
Design issues:
- The horizon has to be long enough to avoid present actions with poor long-term effects.
- The accuracy of the prediction model.
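A minimal sketch of the loop above, assuming hypothetical helpers sense_state, build_problem, solve, and apply_control that stand in for the sensing, modeling, optimization, and actuation machinery; none of these names come from the slides.

```python
import time

def rolling_horizon_control(horizon, period, sense_state, build_problem, solve, apply_control):
    """Repeatedly sense, optimize over the horizon, and apply only the first action.

    sense_state()              -> current state x(t_0)
    build_problem(x0, horizon) -> instantiated static problem (P)
    solve(P)                   -> planned actions U = [u(t_0), ..., u(t_N)]
    apply_control(u0)          -> implements u(t_0) on the physical network
    """
    while True:
        x0 = sense_state()              # step 2: sense the current state
        P = build_problem(x0, horizon)  # step 3: instantiate (P) over the horizon
        U = solve(P)                    # step 4: compute actions over the horizon
        apply_control(U[0])             # step 5: implement only u(t_0)
        time.sleep(period)              # step 6: let the network progress; the horizon rolls
```

Only u(t_0) is ever applied; the remaining planned actions are discarded and recomputed on the next pass, which is what makes the horizon "roll."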

22 The Rolling Horizon: Plan Ahead
[Figure: control plotted against time from t0 (now) through t1, ..., t4, distinguishing the model-predicted control from the implemented control.]

23 The Rolling Horizon: Update Plans Frequently
[Figure: the plans made at t0 and the revised plans made at t1, plotted against time from t0 (now) through t4.]

24 A Framework for Specifying Agent Tasks
Break up the static optimization problem (P) into a set of M small, localized subproblems, {(P_m)}. Assemble M agents into a C-Net, so that each agent matches one subproblem.
Agent m and its subproblem (P_m): the agent has partial perception of, and limited authority over, the physical network. Its variables fall into three classes:
- Proximate variables (x_m, u_m): the agent senses the values of a subset x_m of x and sets the values of a subset u_m of u.
- Neighborhood variables (y_m): variables sensed or set by its neighbors.
- Remote variables (z_m): all the other variables.
[Figure: (P) decomposed into (P_1), ..., (P_4), with agent Ag_1 in the C-Net matched to its subproblem.]
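One way to organize a single agent's view of these variable classes, as a minimal sketch; the class and field names are illustrative, not from the slides.

```python
from dataclasses import dataclass, field

@dataclass
class AgentView:
    """Agent m's partition of the network variables for its subproblem (P_m)."""
    sensed: dict = field(default_factory=dict)        # proximate states x_m the agent can sense
    controls: dict = field(default_factory=dict)      # proximate controls u_m the agent can set
    neighborhood: dict = field(default_factory=dict)  # y_m: variables sensed or set by neighbors
    # Remote variables z_m are everything else; the agent neither senses nor
    # sets them, and ideally its subproblem is insensitive to them.
```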

25 Matching Agents to Subproblems
The rolling horizon formulation of (P_m):
  Minimize   f_m(X_m, U_m, Y_m, Z_m)
  Subject to H_m(X_m, U_m, Y_m, Z_m) = 0
             G_m(X_m, U_m, Y_m, Z_m) <= 0
The matching between agent m and its subproblem (P_m) is:
- Exact: if (P_m) is not sensitive to the remote variables, that is,
  f_m = f_m(X_m, U_m, Y_m), H_m = H_m(X_m, U_m, Y_m), G_m = G_m(X_m, U_m, Y_m).
- Near: if (P_m) is only weakly sensitive to the remote variables.

26 Collaboration Protocols
A protocol prescribes: (a) the data exchanged by agents, (b) in which way, and (c) how agents use the data to solve their problems.
Two protocol versions:
- Proximate Exchange: each agent broadcasts its plans to nearby agents, which in turn take these plans into account. Semi-synchronous, semi-parallel (mutual help): neighbors synchronize with one another, while non-neighbor agents work in parallel.
- Voting: in setting the values of its controls, each agent takes the votes of its neighbors into account. Asynchronous and parallel.
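A minimal sketch of one round of the proximate-exchange protocol from a single agent's point of view. The channel interface and the solve_subproblem helper are assumptions for illustration; the synchronization between neighbors that makes the protocol semi-synchronous is not shown.

```python
def proximate_exchange_round(agent, neighbors, solve_subproblem):
    """One round: read neighbors' latest plans, re-solve (P_m), broadcast the new plan.

    neighbors maps a neighbor's name to a channel with latest() and send(plan).
    """
    # 1. Collect the most recent plans broadcast by the neighbors.
    neighbor_plans = {name: channel.latest() for name, channel in neighbors.items()}

    # 2. Solve the agent's subproblem with its neighborhood variables
    #    fixed to the values implied by those plans.
    plan = solve_subproblem(agent, neighbor_plans)

    # 3. Broadcast the new plan so the neighbors can take it into account.
    for channel in neighbors.values():
        channel.send(plan)
    return plan
```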

27 Equivalence and Convergence
Two questions:
- Equivalence: when are the solutions to the network of subproblems, {(P_m)}, solutions to (P)?
- Convergence: when does the effort of the collaborative agents converge to a solution of {(P_m)}?
Sufficient conditions for equivalence and convergence:
1. Coverage: the C-Net must provide complete coverage of the network.
2. Density: the matching of agents to subproblems must be exact.
3. Convexity: (P) must be convex.
4. Feasibility: (P) must be strictly feasible.
5. Interior-point method: the agents must use an interior-point method.
6. Serial work: the agents must run the semi-synchronous, semi-parallel protocol.

28 Relaxing Sufficient Conditions in Practice
We believe that the following conditions can be relaxed in practice:
1. Density: near matchings of agents to subproblems are likely to be adequate.
2. Convexity: it is impractical in real-world networks.
3. Serial work: serial work within a neighborhood is too slow.
A prototypical network: a forest of pendulums.
- One agent at each pendulum.
- Each agent controls two forces: horizontal and orthogonal.
- Agents collaborate with their nearest neighbors.

29 The Dynamic Control Problem
Problem: drive the pendulums back to the pre-disturbance mode, that is, minimize the cumulative error (from the desired trajectory) and the total control-input cost, subject to the pendulum dynamics.
Three control solutions:
- C1: a centralized, nonlinear optimization package that solves the static optimization problem (P).
- C2: a centralized, feedback-linearization controller.
- C-Net: a collaborative net, with one agent at each pendulum, that solves {(P_m)}.
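The slide's objective and constraints did not survive extraction. A generic form consistent with the description, written in the notation of the rolling horizon formulation (P), might be the following; the desired trajectory x^des and the weight b are assumed notation, with b the control-input cost weight varied in the later experiments:

```latex
\min_{X,U} \;\; \sum_{n=0}^{N} \Big( \lVert x(t_n) - x^{\mathrm{des}}(t_n) \rVert^2
            + b \, \lVert u(t_n) \rVert^2 \Big)
\quad \text{s.t.} \quad H(X,U) = 0, \;\; G(X,U) \le 0,
```

where H(X, U) = 0 embeds the discretized pendulum dynamics and G(X, U) <= 0 the operating limits.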

30 C-Net and C1: Experimental Set-up
Goal: evaluate the loss in quality of the Collaborative Net solution.
Scenarios: place the pendulums in a line to form forests of 2 to 9 pendulums (a 2-pendulum forest, a 3-pendulum forest, and so on, adding one pendulum at a time).
Set-up: the C-Nets and C1 restore the synchronous mode of the pendulums. At each sample time t:
1. solve the network of subproblems, {(P_m)}, with the C-Net;
2. record the objective-function evaluation of the C-Net, F(C-Net);
3. solve the static optimization problem, (P), with C1; and
4. record the objective-function evaluation of C1, F(C1).
Output data: a list of objective-function-evaluation pairs [F(C-Net), F(C1)].

31 C-Net and C1: Results
C-Net excess: the difference in quality between the C-Net and C1 solutions,
  C-Net excess = [F(C-Net) - F(C1)] / F(C1),
where F(C-Net) is the objective-function evaluation attained by the C-Net and F(C1) is the one attained by controller C1.
C-Net penalty: the mean value of the C-Net excess.
[Figure: C-Net penalty (%) versus number of pendulums; the C-Net penalty is low.]
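For concreteness, a small sketch of how the excess and penalty defined above can be computed from the recorded pairs; the function and variable names are illustrative.

```python
def cnet_excess(f_cnet, f_c1):
    """Relative quality loss of the C-Net solution with respect to C1."""
    return (f_cnet - f_c1) / f_c1

def cnet_penalty(pairs):
    """Mean C-Net excess, in percent, over a list of [F(C-Net), F(C1)] pairs."""
    excesses = [cnet_excess(f_cnet, f_c1) for f_cnet, f_c1 in pairs]
    return 100.0 * sum(excesses) / len(excesses)
```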

32 C-Net and C2: Experimental Set-up
Goal: evaluate the performance of the C-Net and of the feedback-linearization controller, C2, a traditional control technique.
Scenario: a forest with 9 pendulums placed in a grid.
Set-up: the C-Net and C2 restore the synchronous mode of the pendulums.
Output data: the cumulative error and input cost, f(x,u), for the C-Net and C2.

33 C-Net and C2: Results
Objective: minimize f(x,u); the lower the f(x,u), the better the solution.
Objective-function evaluations f(x,u) for different control-input cost weights (b):

  Control-Input Cost (b) | C2 (feedback lin.) | C-Net
  10e-4                  | 9.56               | 11.89
  10e-3                  | 10.49              | 12.32
  10e-2                  | 17.05              | 16.00
  10e-1                  | 82.64              | 32.07

As the control-input cost weight grows, the C-Net's relative performance improves.

34 C-Net and C2: Trajectory of Pendulums
[Figures: pendulum trajectories under control of C2 (feedback linearization) and under control of the C-Net.]
C2 immediately drives the pendulums to the desired trajectory; the C-Net waits until it becomes cheaper to drive them there.

35 Conclusion
The experiments show that C-Nets are promising.
Current research effort:
- Development of collaboration protocols that allow agents to work asynchronously and in parallel, at their own speed, using safety margins to guarantee feasibility and to foster effective work between slow and fast agents.
- A taxonomy of collaboration protocols.
What else have we done?
- Employed C-Nets to recover synchronous operation of generators in the IEEE 14-, 30-, and 57-bus power networks.
- Preliminary work on the decomposition of (P) into {(P_m)}: models and algorithms to specify "neighborhood" perception.

36 Hybrid Control Strategies

