Game Theory in Wireless and Communication Networks: Theory, Models, and Applications
Lecture 10: Stochastic Game
Zhu Han, Dusit Niyato, Walid Saad, and Tamer Basar
Overview of Lecture Notes
Introduction to Game Theory: Lecture 1, book 1
Non-cooperative Games: Lecture 1, Chapter 3, book 1
Bayesian Games: Lecture 2, Chapter 4, book 1
Differential Games: Lecture 3, Chapter 5, book 1
Evolutionary Games: Lecture 4, Chapter 6, book 1
Cooperative Games: Lecture 5, Chapter 7, book 1
Auction Theory: Lecture 6, Chapter 8, book 1
Matching Games: Lecture 7, Chapter 2, book 2
Contract Theory: Lecture 8, Chapter 3, book 2
Learning in Games: Lecture 9, Chapter 6, book 2
Stochastic Games: Lecture 10, Chapter 4, book 2
Games with Bounded Rationality: Lecture 11, Chapter 5, book 2
Equilibrium Programming with Equilibrium Constraints: Lecture 12, Chapter 7, book 2
Zero-Determinant Strategies: Lecture 13, Chapter 8, book 2
Mean Field Games: Lecture 14, book 2
Network Economy: Lecture 15, book 2
Overview of Stochastic Game
Stochastic games capture dynamic interactions among players whose decisions affect not only one another, as in conventional static games, but also the so-called state of the game, which determines the individual payoffs reaped by the players. They arise in many engineering situations in which the system is governed by stochastic, dynamic states, such as a wireless channel or the dynamics of a power system. This lecture gives a simple overview of the basics of stochastic games, so as to provide the fundamental tools needed to address such games. For details, see T. Basar and G. J. Olsder, Dynamic Noncooperative Game Theory, SIAM Classics in Applied Mathematics, Philadelphia, PA, USA, 1999.
Definition
Stochastic Game Procedure
Sorry for this lazy slide
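The figure for this slide is not reproduced here. As a rough sketch (all interfaces below are hypothetical, chosen only for illustration), the generic procedure of a stochastic game is: each player observes the current state, the players act simultaneously, each collects a state-dependent payoff, and the state then moves according to a transition probability driven by the state and the joint action:

```python
import random

def play_stochastic_game(payoff, transition, policies, s0, horizon):
    """Simulate one run of a stochastic game.

    payoff(s, joint) -> tuple of per-player stage payoffs in state s;
    transition(s, joint) -> dict {next_state: probability};
    policies[i](s) -> action of player i in state s.
    """
    s, totals = s0, [0.0] * len(policies)
    for _ in range(horizon):
        joint = tuple(pi(s) for pi in policies)      # simultaneous moves
        for i, r in enumerate(payoff(s, joint)):     # state-dependent payoffs
            totals[i] += r
        dist = transition(s, joint)                  # depends on state AND actions
        s = random.choices(list(dist), weights=list(dist.values()))[0]
    return totals, s
```

The loop makes the defining features explicit: payoffs are functions of the state, and the next-state distribution is a function of both the state and the joint action.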
Notes
For the special case in which there is only a single stage, the game reduces to a conventional static game in strategic form.
In general, a stochastic game can have an infinite number of stages; the case of finitely many stages can also be captured, in the presence of an absorbing state.
In a stochastic game, the payoffs at every stage depend on the state and change from one state to another. This is in stark contrast to repeated games, in which the same matrix game is played at every stage (i.e., a repeated game has only one state).
For the case in which there is only one player (i.e., a centralized approach), the stochastic game reduces to a Markov decision problem.
The transition probability depends on both the state and the actions of the players. In some cases, the transition probability may depend on the action of only one player; this special case is known as a single-controller stochastic game.
The evolution of the state can follow a Markov process or differential equations.
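The single-player special case can be made concrete: with one decision maker the game is a Markov decision problem, solvable by standard value iteration. A minimal sketch on a toy two-state MDP (all rewards, probabilities, and names below are made up for illustration):

```python
# Single-player special case: the stochastic game collapses to an MDP.
# Rewards and transitions still depend on both the state and the action.
GAMMA = 0.9
STATES = ["s0", "s1"]
ACTIONS = ["stay", "move"]
R = {"s0": {"stay": 0.0, "move": 1.0},           # reward[state][action]
     "s1": {"stay": 2.0, "move": 0.0}}
P = {"s0": {"stay": {"s0": 1.0}, "move": {"s1": 1.0}},   # P[state][action]
     "s1": {"stay": {"s1": 0.8, "s0": 0.2}, "move": {"s0": 1.0}}}

def value_iteration(tol=1e-8):
    """Standard Bellman backup until successive iterates differ by < tol."""
    V = {s: 0.0 for s in STATES}
    while True:
        newV = {s: max(R[s][a] + GAMMA * sum(p * V[t] for t, p in P[s][a].items())
                       for a in ACTIONS)
                for s in STATES}
        if max(abs(newV[s] - V[s]) for s in STATES) < tol:
            return newV
        V = newV
```

In the multi-player game the `max` over one player's actions is replaced by an equilibrium computation over the joint action space, which is what makes the general problem harder.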
Definitions
Payoff
ε-Nash equilibrium
Properties
Two-player, zero-sum stochastic games
Theorems
Stochastic Markov Game
A stochastic game is a collection of normal-form games that the agents play repeatedly.
The particular game played at any time depends probabilistically on:
the previous game played, and
the actions of the agents in that game.
In the corresponding transition diagram, the states are the games and the transition labels are joint action-payoff pairs.
(Thanks to Dana Nau for these slides.)
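This "collection of games" view can be written down directly: each state is itself a bimatrix game, and the transition diagram's edges are labeled by joint action-payoff pairs. A minimal sketch with two states (all payoffs and probabilities are made-up illustration values):

```python
GAMES = {  # each state IS a normal-form (bimatrix) game: joint action -> payoffs
    "G1": {("C", "C"): (3, 3), ("C", "D"): (0, 4),
           ("D", "C"): (4, 0), ("D", "D"): (1, 1)},
    "G2": {("C", "C"): (2, 2), ("C", "D"): (0, 0),
           ("D", "C"): (0, 0), ("D", "D"): (1, 1)},
}

def transition(state, joint):
    """Next-game distribution; here mutual cooperation keeps the 'good' game G1."""
    if state == "G1" and joint == ("C", "C"):
        return {"G1": 1.0}
    if state == "G1":
        return {"G2": 1.0}
    return {"G1": 0.5, "G2": 0.5}          # from G2, recovery is a coin flip

def edges():
    """Transition-diagram edges labeled by (joint action, payoff) pairs."""
    out = []
    for s, game in GAMES.items():
        for joint, pay in game.items():
            for t, p in transition(s, joint).items():
                out.append((s, t, joint, pay, p))
    return out
```

Enumerating `edges()` reproduces the diagram described on the slide: nodes are the games, and each edge carries the joint action, the stage payoffs, and the transition probability.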
Stochastic Markov Game
Histories and Rewards
Strategies
Equilibria
Equilibria
Two-Player Zero-Sum Stochastic Games
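The slide's theorem statements are not reproduced here. For two-player zero-sum stochastic games, the classical approach (due to Shapley) is value iteration in which each backup solves a zero-sum matrix game rather than taking a simple max. A minimal sketch restricted to 2x2 stage games, with toy payoff numbers of my own choosing:

```python
def matrix_game_value(A):
    """Value of a 2x2 zero-sum matrix game (row player maximizes)."""
    (a, b), (c, d) = A
    maximin = max(min(a, b), min(c, d))          # row player's guarantee
    minimax = min(max(a, c), max(b, d))          # column player's guarantee
    if maximin == minimax:                       # pure-strategy saddle point
        return maximin
    return (a * d - b * c) / (a + d - b - c)     # mixed-strategy value

def shapley_iteration(R, P, gamma=0.8, tol=1e-9):
    """R[s]: 2x2 stage-payoff matrix; P[s][i][j]: {next_state: probability}."""
    V = {s: 0.0 for s in R}
    while True:
        newV = {s: matrix_game_value(
                    [[R[s][i][j] + gamma * sum(p * V[t]
                                               for t, p in P[s][i][j].items())
                      for j in range(2)] for i in range(2)])
                for s in R}
        if max(abs(newV[s] - V[s]) for s in R) < tol:
            return newV
        V = newV

# Toy example: matching pennies (stage value 0) and a symmetric game of
# stage value 1, deterministically alternating between the two states.
R = {"s0": [[1, -1], [-1, 1]], "s1": [[0, 2], [2, 0]]}
P = {"s0": [[{"s1": 1.0}] * 2] * 2, "s1": [[{"s0": 1.0}] * 2] * 2}
```

At the fixed point V("s0") = gamma * V("s1") and V("s1") = 1 + gamma * V("s0"), so with gamma = 0.8 the values converge to 0.8/0.36 and 1/0.36 respectively.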
Backgammon
The Expectiminimax Algorithm
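Expectiminimax generalizes minimax search to games with chance nodes (such as Backgammon's dice rolls): max and min nodes back up the extreme child value, while a chance node backs up the probability-weighted average of its children. A minimal sketch on a hand-built toy tree (not an actual Backgammon position):

```python
def expectiminimax(node):
    """Evaluate a game tree with 'max', 'min', 'chance', and 'leaf' nodes."""
    kind = node[0]
    if kind == "leaf":
        return node[1]
    if kind == "max":
        return max(expectiminimax(c) for c in node[1])
    if kind == "min":
        return min(expectiminimax(c) for c in node[1])
    if kind == "chance":                     # node[1]: list of (prob, child)
        return sum(p * expectiminimax(c) for p, c in node[1])
    raise ValueError(kind)

# Toy tree: a fair chance node over two min nodes, versus a sure leaf.
tree = ("max", [
    ("chance", [(0.5, ("min", [("leaf", 2), ("leaf", 8)])),
                (0.5, ("min", [("leaf", 6), ("leaf", 4)]))]),
    ("leaf", 5),
])
```

Here the chance branch evaluates to 0.5*2 + 0.5*4 = 3, so the max player prefers the sure leaf worth 5. In practice (as the next slide's title suggests) the tree is cut off at a depth limit and leaves are scored by an evaluation function.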
In practice
Summary
General definition: procedure, payoffs, ε-Nash equilibrium
Stochastic Markov games: histories and rewards, equilibria, expectiminimax; example: Backgammon
Major references:
T. Basar and G. J. Olsder, Dynamic Noncooperative Game Theory, SIAM Classics in Applied Mathematics, Philadelphia, PA, USA, 1999.
Eitan Altman's work on stochastic games.