Designing Games for Distributed Optimization
Na Li and Jason R. Marden
IEEE Journal of Selected Topics in Signal Processing, Vol. 7, No. 2, 2013
Presenter: Seyyed Shaho Alaviani
Outline:
- Introduction: advantages of game theory
- Problem Formulation and Preliminaries: potential games; state based potential games; stationary state Nash equilibrium
- Main Results: state based game design; analytical properties of the designed game; learning algorithm
- Numerical Examples
- Conclusions
Network problems:
- Consensus
- Rendezvous
- Formation
- Schooling
- Flocking
All are special cases of distributed optimization.
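To make the "special case" claim concrete, consensus can be posed as distributed minimization of a disagreement function over a communication graph. The sketch below is illustrative and hedged: the ring graph, step size, and iteration count are my own choices, not from the paper, and the update is plain synchronous gradient descent rather than the paper's game-theoretic construction.

```python
# Hedged sketch: consensus as distributed optimization.
# Each agent i holds a value x_i; consensus corresponds to minimizing
# the disagreement f(x) = (1/2) * sum over edges (i,j) of (x_i - x_j)^2.
# Each agent's gradient step uses only its neighbors' values.

def consensus_step(x, neighbors, step=0.1):
    """One synchronous gradient step on the disagreement function."""
    return [
        xi - step * sum(xi - x[j] for j in neighbors[i])
        for i, xi in enumerate(x)
    ]

# Ring graph on 4 agents (illustrative choice).
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}
x = [0.0, 1.0, 2.0, 3.0]
for _ in range(200):
    x = consensus_step(x, neighbors)
# The update preserves the sum of the states, so all values approach
# the average of the initial states (1.5 here).
```

Because the graph is undirected, each step conserves the sum of the agents' values, which is why the agents agree on the average rather than some arbitrary common point.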
Introduction
Game theory: a powerful tool for the design and control of multi-agent systems.
Using game theory requires two steps:
1- Modeling the agents as self-interested decision makers in a game-theoretic environment: defining a set of choices and a local objective function for each decision maker.
2- Specifying a distributed learning algorithm that enables the agents to reach a Nash equilibrium of the designed game.
Core advantage of game theory: it provides a hierarchical decomposition between the distributed optimization problem (game design) and the specific local decision rules (distributed learning algorithm). Example: the Lagrangian.
The goal of this paper: to establish a methodology for the design of local agent objective functions that leads to desirable system-wide behavior.
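The Lagrangian example can be made concrete with standard dual decomposition; this is an illustrative textbook sketch, not the paper's exact construction:

```latex
% Standard dual decomposition (illustrative; not the paper's construction).
% Coupled problem:
%   min_{x_1,...,x_N}  sum_i f_i(x_i)   s.t.  sum_i A_i x_i = b
% Lagrangian:
L(x,\lambda) = \sum_{i=1}^{N} f_i(x_i) + \lambda^\top\Big(\sum_{i=1}^{N} A_i x_i - b\Big)
             = \sum_{i=1}^{N} \Big( f_i(x_i) + \lambda^\top A_i x_i \Big) - \lambda^\top b
% The minimization over x splits into N independent per-agent subproblems,
% coordinated only through the multiplier \lambda, which a separate
% "learning" layer updates (e.g. by dual ascent).
```

This mirrors the paper's hierarchy: the decomposition (who minimizes what) is the game design, and the multiplier update is the distributed learning rule.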
Graph
Connected and disconnected graphs; directed and undirected graphs.
[Figure: examples of connected, disconnected, directed, and undirected graphs]
Problem Formulation and Preliminaries
Main properties of potential games:
1- A pure strategy Nash equilibrium (PSNE) is guaranteed to exist.
2- There are several distributed learning algorithms with proven asymptotic guarantees.
3- Learning a PSNE in potential games is robust: heterogeneous clock rates and informational delays are not problematic.
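These properties rest on the standard definition of an exact potential game, recalled here in a hedged sketch using cost functions (the convention this paper uses for agents):

```latex
% A game with agents i, joint actions a = (a_i, a_{-i}), and cost
% functions J_i is an (exact) potential game if there is a single
% potential function \phi such that, for every agent i, every a_{-i},
% and every pair of actions a_i, a_i':
J_i(a_i', a_{-i}) - J_i(a_i, a_{-i}) \;=\; \phi(a_i', a_{-i}) - \phi(a_i, a_{-i}).
% Every unilateral cost improvement strictly decreases \phi, so (in a
% finite game) minimizers of \phi are pure strategy Nash equilibria,
% which gives property 1 above.
```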
Stochastic games (L. S. Shapley, 1953): in a stochastic game the play proceeds by steps from position to position, according to transition probabilities controlled jointly by two players.
State based potential games (J. Marden, 2012): a simplification of stochastic games that represents an extension of strategic form games, where an underlying state space is introduced into the game-theoretic environment.
Main Results
State Based Game Design: the goal is to establish a state based game formulation for our distributed optimization problem that satisfies the following properties:
A state based game design for distributed optimization:
- State space
- Action sets
- State dynamics
- Invariance associated with the state dynamics
- Agent cost functions
State Space:
Agent cost functions:
Analytical Properties of the Designed Game
Theorem 2 shows that the designed game is a state based potential game.
Theorem 2: The state based game is a state based potential game with potential function
Theorem 3 shows that all equilibria of the designed game are solutions to the optimization problem.
Question: could the results in Theorems 2 and 3 have been attained using the framework of strategic form games? Answer: no, this is impossible.
Learning Algorithm
We prove that the gradient play learning algorithm converges to a stationary state Nash equilibrium (NE). Assumptions:
asymptotically converges to a stationary state NE.
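Gradient play itself is simple to state: each agent takes a gradient step on its own cost in its own action, and in a potential game these individual steps jointly descend the potential function. The sketch below is generic and hedged: the quadratic costs, step size, and iteration count are illustrative choices, not the paper's state based construction or its convergence conditions.

```python
# Hedged sketch of gradient play on a potential game: each agent descends
# the gradient of its OWN cost with respect to its OWN action. Costs below
# are illustrative quadratics, not the paper's state based design.

def gradient_play(grads, a, step=0.05, iters=500):
    """grads[i](a) returns dJ_i/da_i at the joint action a.
    All agents update synchronously."""
    for _ in range(iters):
        a = [a_i - step * g(a) for a_i, g in zip(a, grads)]
    return a

# Two agents whose costs share their partial derivatives with the
# potential phi(a) = (a0 - 1)^2 + (a1 + 2)^2 + a0 * a1.
grads = [
    lambda a: 2 * (a[0] - 1) + a[1],   # d(phi)/d(a0)
    lambda a: 2 * (a[1] + 2) + a[0],   # d(phi)/d(a1)
]
a = gradient_play(grads, [0.0, 0.0])
# a approaches the stationary point of phi: a0 = 8/3, a1 = -10/3.
```

Because the individual gradients agree with the partial derivatives of a common potential, the joint update is exactly gradient descent on phi, which is why a single stationary point attracts both agents here.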
Numerical Examples
Example 1: Consider the following function to be minimized:
Example 2: Distributed Routing Problem
A source and a destination connected by m routes. Application: the Internet.
Notation: the amount of traffic of agent i; the percentage of traffic that agent i designates to route r.
Then the total congestion in the network will be
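One common way to compute such a total congestion can be sketched as follows; this is a hedged illustration assuming each route has a latency function of its total load and that total congestion sums load times latency over routes. The traffic amounts, splitting fractions, and latency functions are my own illustrative choices, not the paper's.

```python
# Hedged sketch: total congestion in a routing game, assuming each route r
# has a latency function c_r of its total load, and total congestion is
# sum over routes of load * latency. All numbers below are illustrative.

def total_congestion(traffic, fractions, latency):
    """traffic[i]: demand of agent i; fractions[i][r]: share of agent i's
    traffic sent on route r; latency[r](load): per-unit congestion cost."""
    n_routes = len(latency)
    loads = [
        sum(traffic[i] * fractions[i][r] for i in range(len(traffic)))
        for r in range(n_routes)
    ]
    return sum(loads[r] * latency[r](loads[r]) for r in range(n_routes))

# Two agents, two routes, linear latencies (illustrative).
traffic = [1.0, 2.0]
fractions = [[0.5, 0.5], [0.25, 0.75]]
latency = [lambda u: 1.0 + u, lambda u: 2.0 * u]
# loads = [1.0, 2.0]; total congestion = 1.0 * 2.0 + 2.0 * 4.0 = 10.0
print(total_congestion(traffic, fractions, latency))
```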
Simulation setup: R = 5 routes, N = 10 agents.
[Figure: communication graph]
Conclusions:
- This work presents an approach to distributed optimization using the framework of state based potential games.
- We provide a systematic methodology for localizing the agents' objective functions while ensuring that the resulting equilibria are optimal with respect to the system-level objective function.
- It is proved that the gradient play learning algorithm guarantees convergence to a stationary state NE in any state based potential game.
- The approach is robust.
MANY THANKS FOR YOUR ATTENTION